
2024-07-03 06:10:00

AI AS A THREAT

SCI AM - MAY 6, 2024 - AI Doesn’t Threaten Humanity. Its Owners Do

We shouldn’t be afraid of AI taking over humanity; we should fear the fact that our humanity hasn’t kept up with our technology.

In April a lawsuit revealed that Google Chrome’s private browsing mode, known as “Incognito,” was not actually as private as we might think. Google was still collecting data, which it has now agreed to destroy, and its “private” browsing does not actually stop websites or your Internet service provider, such as Comcast or AT&T, from tracking your activities.

In fact, that information harvesting is the whole business model of our digital and smart-device-enabled world. All of our habits and behaviors are monitored, reduced to “data” for machine learning AI, and the findings are used to manipulate us for other people’s gains.

It doesn’t have to be this way. AI could be used more ethically for everyone’s benefit. We shouldn’t fear AI as a technology. We should instead worry about who owns AI and how its owners wield it to invade privacy and erode democracy.

No surprise, tech companies, state entities, corporations and other private interests increasingly invade our privacy and spy on us. Insurance companies monitor their clients’ sleep apnea machines to deny coverage for improper use. Children’s toys spy on playtime and collect data about our kids. Period tracker apps share with Facebook and other third parties (including state authorities in abortion-restricted states) when a woman last had sex, along with her contraceptive practices, menstrual details and even her moods. Home security cameras surveil customers and are vulnerable to hackers. Medical apps share personal information with lawyers. Data brokers, companies that track people across platforms and technologies, amplify these trespasses by selling bundled user profiles to anyone willing to pay.

This explicit spying is obvious and feels wrong at a visceral level. What’s even more sinister, however, is how the resulting data are used—not only sold to advertisers or any private interest that seeks to influence our behavior, but deployed for AI training in order to improve machine learning. Potentially this could be a good thing. Humanity could learn more about itself, discovering our shortcomings and how we might address them. That could assist individuals in getting help and meeting their needs.

Instead, machine learning is used to predict and prescribe, that is, to estimate who we are and what is most likely to influence us and change our behavior. One such goal is getting us to “engage” more with technology and generate more data. AI is being used to try to know us better than we know ourselves, to get us addicted to technology, and to affect us without our awareness, consent or best interests in mind. In other words, AI is not helping humanity address our shortcomings; it is exploiting our vulnerabilities so private interests can guide how we think, act and feel.

A Facebook whistleblower made this all clear several years ago. To meet its revenue goals, the company used AI to keep people on the platform longer. This meant finding the perfect amount of anger-inducing and provocative content, so that bullying, conspiracies, hate speech, disinformation and other harmful communications flourished. Experimenting on users without their knowledge, the company designed addictive features into the technology, despite knowing that this harmed teenage girls. A United Nations report labeled Facebook a “useful instrument” for spreading hate in an attempted genocide in Myanmar, and the company admitted the platform’s role in amplifying violence. Corporations and other interests can thus use AI to learn our psychological weaknesses, invite us to be the most insecure version of ourselves and push our buttons to achieve their own desired ends.

So when we use our phones, computers, home security systems, health devices, smart watches, cars, toys, home assistants, apps, gadgets and what have you, they are also using us. As we search, we are searched. As we narrate our lives on social media, our stories and scrolling are captured. Despite feeling free and in control, we are subtly being guided (or “nudged” in benevolent tech speak) towards constrained ideas and outcomes. Based on previous behaviors, we are offered a flattering and hyperindividualized world that amplifies and confirms our biases, using our own interests and personalities against us to keep us coming back for more. Employing AI in this manner might be good for business, but it’s disastrous for the empathy and informed deliberations required for democracy.

Even as tech companies ask us to accept cookies or belatedly seek our consent, these efforts are not made in good faith. They give us an illusion of privacy even as “improving” the companies’ services relies on machines learning more about us than we know ourselves and finding patterns in our behavior that no one knew to look for. Even the developers of AI don’t know exactly how it works, and therefore can’t meaningfully tell us what we’re consenting to.

Under the current business model, the advances of AI and robot technology will enrich the few while making life more difficult for the many. Sure, you could argue that people will benefit from the potential advances in health, design and whatever efficiencies AI might bring (and tech industry-enthralled economists undoubtedly will so argue). But this is less meaningful when people have been robbed of their dignity, blamed for not keeping up and continuously spied on and manipulated for someone else’s gain.

We shouldn’t be afraid of AI taking over humanity; we should fear the fact that our humanity hasn’t kept up with our technology. Instead of enabling a world where we work less and live more, billionaires have designed a system to reward the few at the expense of the many. While AI has and will continue to do great things, it’s also been used to make people more anxious, precarious and self-centered, as well as less free. Until we truly learn to care about one another and consider the good of all, technology will continue to ensnare and not emancipate us. There’s no such thing as artificial ethics, and human principles must guide our technology, not the other way around. This starts by asking about who owns AI and how it might be employed in everyone’s best interest. The future belongs to us all.

-----

