New Artificial Intelligence Designed to Be Mentally Unstable: What Could Go Wrong?

By Jake Anderson

We tend to think of artificial intelligence entities as flawless intellects, early prototypes of the powerful ‘artilects’ futurists imagine will one day rule our world. We also tend to think of them as not being subject to unhappy thoughts or feelings. But one company has created an artificially intelligent machine-learning system that suffers from the AI equivalent of mental instability — and its creators designed it that way deliberately.

This tortured artist of an AI is called DABUS, short for “Device for the Autonomous Bootstrapping of Unified Sentience.” It was created by computer scientist Stephen Thaler, who used a technique called “generative adversarial networks” to mimic the extreme fluctuations in thought and emotion experienced by humans who suffer from mental illness. His Missouri-based company, Imagination Engines, developed a two-module process: Imagitron infused digital noise into a neural network, causing DABUS to generate new ideas and content; then a second neural network, Perceptron, was integrated to assess DABUS’s output and provide feedback. On top of that, they added their secret sauce.
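
To make that two-module loop concrete, here is a minimal sketch in plain numpy. It is not Thaler’s actual system: the network sizes, the noise schedule, and the critic’s scoring rule are all invented for illustration. It only shows the pattern the article describes, a noise-perturbed generator (“Imagitron”) whose output is scored by a critic (“Perceptron”), and how varying the noise level produces a spectrum from rigid, repetitive output to erratic output.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Imagitron": a tiny feed-forward net whose weights get perturbed with
# noise, so the same seed input yields varied, sometimes incoherent, output.
W1 = rng.normal(size=(16, 32))
W2 = rng.normal(size=(32, 8))

def imagitron(seed, noise_level):
    # Inject "digital noise" directly into the weights before generating.
    n1 = W1 + rng.normal(scale=noise_level, size=W1.shape)
    n2 = W2 + rng.normal(scale=noise_level, size=W2.shape)
    hidden = np.tanh(seed @ n1)
    return np.tanh(hidden @ n2)      # one generated "idea": an 8-dim vector

# "Perceptron": a critic that scores each idea, here as novelty (distance
# from a remembered bank of past ideas) minus raw incoherence.
memory = rng.normal(size=(50, 8))

def perceptron(idea):
    novelty = np.linalg.norm(memory - idea, axis=1).min()
    incoherence = np.abs(idea).max()
    return novelty - incoherence

# Sweep the noise level: low noise gives rigid, repetitive output (the
# "reduced cognitive flow" end), high noise gives erratic output the
# critic marks down (the "hallucinations and mania" end).
seed_vec = rng.normal(size=16)
for noise_level in (0.01, 0.5, 2.0):
    ideas = [imagitron(seed_vec, noise_level) for _ in range(20)]
    scores = [perceptron(idea) for idea in ideas]
    print(f"noise {noise_level:4}: mean critic score {np.mean(scores):+.3f}")
```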

This method of creating an echo chamber between neural networks is not new or unique. However, what Thaler and his company are using it for — deliberately tweaking an AI’s cognitive state to make its artistic output more experimental — is. Their process triggers ‘unhappy’ associations and fluctuations in rhythm. The result is an AI that exhibits symptoms of insanity.

“At one end, we see all the characteristic symptoms of mental illness, hallucinations, attention deficit and mania,” Thaler says, describing DABUS’s faculties and temperament. “At the other, we have reduced cognitive flow and depression.”

Thaler believes that integrating human-like problem-solving — and human-like flaws, such as mental illness — may significantly enhance an AI’s ability to create innovative artwork and subjective output. While everyone is familiar with the psychedelic and surreal canvases produced by Google’s Deep Dream algorithm, the more measured and meditative work of DABUS may prove uniquely impressive.

Above: a few of DABUS’s surreal pieces, born of neural networks

Thaler also believes this technique will improve the abilities of AI in stock market predictions and autonomous robot decision-making. But what are the risks of infusing mental illness into a machine mind? Thaler believes there are limits, but that psychological problems could be just as natural to AI as they are to humans.

“The AI systems of the future will have their bouts of mental illness,” Thaler speculates. “Especially if they aspire to create more than what they know.”

Here Is a Strategy For Making Yourself Irreplaceable Before Artificial Intelligence Learns to do Your Job

Photo: Zoranm | Getty Images

The robots are coming! They’re not coming to destroy us, but they will take some jobs — and if one of those jobs is yours, that’s nearly as frightening.

We’re no strangers to seeing jobs replaced by automation. Despite various claims that we’re losing trade jobs to China and Mexico, the majority of lost manufacturing jobs in the United States — which a Ball State study estimates to be 87 percent — are due to increased productivity and efficiency (i.e., better machines and automation).

But, the losses aren’t going to stop there. One PwC study found that by the early 2030s, approximately 38 percent of all United States jobs could be replaced by artificial intelligence (AI) and automation. That’s higher than the U.K. (30 percent), Germany (35 percent) or Japan (21 percent), and for many Americans, much too high for comfort. Other futurists are even more optimistic (or pessimistic, depending on how you consider it); venture capitalist Kai-Fu Lee believes 50 percent of jobs will be replaced within the next decade.

Regardless of where you find yourself on the spectrum of prognostication, it’s certain that AI and robotics are going to replace many jobs within the next couple of decades, including white-collar jobs previously thought impossible to automate. So, in this world of advanced AI, what can you do to make yourself irreplaceable?

Know your industry.

First, you need to understand your industry and how AI could affect it. Some will be more heavily impacted than others and each will be affected in a different way. The Future of Work community outlines five industries that are going to be most heavily impacted by automation:

1. Medicine/healthcare. AI in healthcare is already being used to crunch big data and provide better diagnoses for patients — it’s also being used for more precise surgeries (such as with the Smart Tissue Autonomous Robot — STAR).

2. Manufacturing. Manufacturing jobs have been on a steady decline for the past several decades as we’ve gradually advanced the technology we use to manufacture. AI will accelerate that trend even further.

3. Transportation. Self-driving cars are the big advancement here. Frontrunners like Tesla and Waymo are already testing early rider programs, and Waymo cars have driven more than three million miles by themselves.

4. Customer service. Customer service roles are starting to be replaced as customer service tech improves natural language recognition and conversational capacity.

5. Finance. Robo-advisors, like those from Wealthfront and Betterment, are starting to step in and replace traditional human advisors. Considering the lack of trust the public has in the sector, there’s going to be little opposition to this trend.

If you find yourself in one of these industries and feel your role is replaceable, you could be proactive and start preparing for a change to a different, less vulnerable industry. For now, creative roles and roles that require personal interaction (such as design, writing, event planning and PR management) are on the safer end of the spectrum.

Focus on ideas and management.

If a process can be documented and modeled, it can be automated. That used to apply to highly repetitive tasks only, but today’s advanced AI can tackle unbelievably sophisticated tasks — as long as they’re bound by a set of rules.

For example, Google DeepMind’s AlphaGo can beat the best human Go players because there’s a clear victory condition and clear rules dictating the game — even though the game itself is astronomically complex. In fact, its newer version, AlphaGo Zero, not only taught itself to play but beat the old AlphaGo 100:0.
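
AlphaGo Zero’s full method (deep networks plus Monte Carlo tree search) is far too large for a short example, but the core idea this paragraph points at — learning purely from self-play against a clear victory condition and fixed rules — can be sketched with tabular Q-learning on tic-tac-toe. This is a toy stand-in under that assumption, not DeepMind’s algorithm.

```python
import random
from collections import defaultdict

Q = defaultdict(float)            # (board, move) -> learned value for the mover
ALPHA, EPSILON = 0.3, 0.2         # learning rate, exploration rate
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
        (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(b):
    for i, j, k in WINS:
        if b[i] != "." and b[i] == b[j] == b[k]:
            return b[i]
    return None

def moves(b):
    return [i for i, c in enumerate(b) if c == "."]

def pick(board):
    legal = moves(board)
    if random.random() < EPSILON:                       # explore
        return random.choice(legal)
    return max(legal, key=lambda m: Q[(board, m)])      # exploit

def self_play_episode():
    board, player, history = "." * 9, "X", []
    while True:
        m = pick(board)
        history.append((board, m, player))
        board = board[:m] + player + board[m + 1:]
        w = winner(board)
        if w or not moves(board):
            # Monte Carlo update: +1 to the winner's moves, -1 to the
            # loser's, 0 to everyone's on a draw.
            for b, mv, p in history:
                reward = 0.0 if not w else (1.0 if p == w else -1.0)
                Q[(b, mv)] += ALPHA * (reward - Q[(b, mv)])
            return
        player = "O" if player == "X" else "X"

random.seed(0)
for _ in range(50_000):
    self_play_episode()

# Probe a position: on "XX....O.O" it is X's turn, and index 2 completes
# the top row. After enough self-play that move should rank highest.
board = "XX....O.O"
print(sorted(((round(Q[(board, m)], 2), m) for m in moves(board)), reverse=True))
```

Note that the only inputs are the rules and the win condition, which is exactly why rule-bound tasks fall to this kind of approach.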

Where AI fails is in generating novel ideas, thinking in abstract concepts and providing overall direction. If you spend your career following the rules and doing repetitive tasks, you’re going to be replaceable. Instead, focus on bringing forward new ideas, setting direction and collaborating with others on big-picture work; machines won’t be able to touch you for many years.

Build relationships.

Even if they can mimic it, machines don’t have empathy. They don’t have personalities, and they can’t build relationships. Even if your bosses could replace you with a machine, they may not want to if it means losing you, the person, in that role. If your role depends on your ability to connect with other people (such as in sales or HR), you may not be replaceable at all. Accordingly, you should spend significant time improving your relationship-building and relationship-management skills, which machines don’t currently have the capacity to master.

Related: Artificial Intelligence Is Likely to Make a Career in Finance, Medicine or Law a Lot Less Lucrative

Learn to adapt.

Finally, and most importantly, be prepared to adapt. Though high-level projections anticipate jobs being “replaced” by AI, it’s more likely that they’ll be displaced or augmented. Instead of your bosses firing you and putting a machine in your place, they’ll probably promote you and put you in charge of operating the machine that did what you used to do.

Of course, not everyone will be cut out for this transformation. Some people will be reluctant to change; others will have a hard time learning the new technology. If you want to survive in the job market, you need to be better than that. You need to be flexible and roll with the punches. Learn everything you can about the new tech in your industry and stay open to multiple career options.

Are the machines coming for your job? Almost certainly — but that’s no reason to fear. Machines will probably take your job, or at least make the attempt, but that just means you can transform your job, work alongside the machines or find a new role entirely.

The term “Luddite,” now used to refer to someone reluctant to accept new technology, originally referred to English textile workers who feared that weaving machines would take their jobs. In retrospect, those fears of a now-primitive technological advancement seem ludicrous. Perhaps, a few decades from now, we’ll feel the same way about the AI revolution.

Copyright 2017 Entrepreneur.com Inc., All rights reserved

This article originally appeared on entrepreneur.com

Source

http://stamfordadvocate.com/news/article/Here-Is-a-Strategy-For-Making-Yourself-12365232.php

Can algorithmic justice save us from the coming artificial intelligence dystopia?

I recently learned about an emerging field, algorithmic justice, from two friends I met in grad school. Algorithmic justice is a movement to address algorithmic bias in technology and artificial intelligence, which, as defined by the Algorithmic Justice League, is “like human bias [and] can result in exclusionary experiences and discriminatory practices.” As a self-proclaimed community organizer in our digital age, I found my interest, and my concern, piqued by how AI relates to feminism and other social justice movements.

Artificial intelligence—in contrast to natural intelligence that humans and animals have—is the capacity for a device to perceive its surrounding environment and take action to achieve a specific goal. AI is romanticized and dramatized in Hollywood films, yet its development is not mythical or a concern of the future—it is infiltrating our daily lives already. And without the development of ethical standards, AI has a tendency to exacerbate our pre-existing biases around race, gender, class, health, and so much more.

Zeynep Tufekci, self-proclaimed techno-sociologist, suggests in her recent TED Talk that AI is building a dystopia “one click at a time.” I’m sure many of us have had social media ads follow us around for a week, or have been shocked when a Facebook ad for something we didn’t realize we wanted appears and entices us to purchase it. At first glance, this use of digital technologies just seems like a contemporary version of TV commercials and subway ads, but Tufekci explains that it means so much more.

Persuasion architectures, which are mapping processes to understand consumers’ purchasing patterns in relation to sales patterns on a website, identify a person’s weaknesses based on their digital behavior and social media consumption. Every search, click, status, and photo upload is data that is then used to build heuristic profiles of us and make assumptions about our interests and future behaviors. This form of data collection is ripe for use in discrimination.

Marketing companies and electoral organizers have long used assumptions about a community’s behavior in a variety of ways. That is arguably a good thing when you’re able to cut voter turf based on people’s likelihood to vote when it’s GOTV weekend and you can only knock on so many doors. Cathy O’Neil, mathematician and author of Weapons of Math Destruction, recognizes how vital tailored messaging is when mobilizing others, but cautions us: “What’s efficient for campaigns is inefficient for democracies.”

Let’s take flight information as an example. Tufekci explains that as machine learning absorbs every click, comment, search, and purchase, the system builds a profile of each of us to predict future behaviors — like, for example, whether a person is likely to purchase a ticket to Las Vegas. It’s possible for machine learning to predict that people who are bipolar and entering a manic phase are likely to be prime customers, given predicted spending and gambling habits, and for companies to therefore target people based on specific trends in their online behavior.
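
A minimal sketch of that kind of pipeline, under heavy assumptions: behavioral event counts become a feature vector, and an off-the-shelf classifier scores purchase likelihood. The feature names, the data, and the planted correlation are all invented for illustration; real persuasion architectures are proprietary and vastly larger.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Each row is one user's recent event counts:
# [late-night sessions, gambling-related searches,
#  travel-page clicks, one-click impulse purchases]
X = rng.poisson(lam=[2, 1, 3, 1], size=(n, 4))

# Synthetic label: did the user buy a Las Vegas ticket? We plant the
# correlation the article warns about so the model will rediscover it.
logits = 0.8 * X[:, 1] + 0.5 * X[:, 3] - 2.5
y = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new user whose behavior spikes on gambling searches and
# impulse buys: exactly the profile a targeter would flag.
user = np.array([[6, 5, 2, 4]])
print(f"predicted purchase probability: {model.predict_proba(user)[0, 1]:.2f}")
```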

Tufekci anecdotally shares the story of a computer scientist who was faced with a decision about whether to exploit a persuasion architecture. He successfully tested the possibility of detecting mania from users’ social media posts before a clinical diagnosis was made. This is concerning because the computer scientist didn’t fully understand how or why the test worked, a natural limitation of such opaque models. And it is troubling that there is no regulation on how to use this sensitive information. In one scenario this tool could be used to identify who is likely in a manic state in order to connect them to mental health services; alternatively, it could be used to target them for flight purchases.

Algorithms also tend to perpetuate echo chambers and steer viewing habits on the internet. After you first search for a video on YouTube, the suggestion sidebar will provide you with increasingly polarized videos. So if a person is watching a video about anti-choice activists or alt-right marches, the succeeding videos will morph into an echo chamber of extremist ideologies. The same is true for progressive viewpoints. Through this escalation, YouTube keeps viewers on its site longer than they intended, and the lack of diversity in the video suggestions easily perpetuates political polarization.
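
The escalation dynamic is easy to reproduce in a toy model. The sketch below assumes, purely for illustration, a recommender that targets content one notch more “intense” than the last video watched because intensity correlates with watch time; YouTube’s actual ranking system is not public.

```python
# Intensity runs from -1.0 (one political extreme) through 0.0 (neutral)
# to +1.0 (the opposite extreme). Catalog and ranking rule are invented.
catalog = [round(0.1 * i, 1) for i in range(-10, 11)]

def recommend(last, catalog):
    # Target content one notch more extreme, in whichever direction the
    # viewer already leans: engagement metrics reward escalation.
    step = 0.1 if last >= 0 else -0.1
    target = max(-1.0, min(1.0, last + step))
    return min(catalog, key=lambda v: abs(v - target))

for start in (0.2, -0.2):              # two mildly partisan first videos
    watched, trail = start, [start]
    for _ in range(10):
        watched = recommend(watched, catalog)
        trail.append(watched)
    print(trail)   # each trail ratchets monotonically toward an extreme
```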

Algorithms aren’t only tracking our individual social media habits for capitalist purposes; they are also increasingly used to digitize court proceedings and the criminal justice system. Recently there have been pilot programs that assign criminal defendants a computerized risk score to assess the likelihood of recidivism, based on previous court decisions and the current prison population. As Michelle Alexander’s The New Jim Crow and Ava DuVernay’s 13th have shown, the criminal justice system is biased against people of color, especially Black and Latino defendants. Data can’t be neutral, given that the methods used to collect it, and the people who input it, are biased, and are therefore part of a larger systemic problem of racism.
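
A small simulation makes that mechanism visible. All numbers below are invented: both groups re-offend at the same true rate, but one is policed more heavily, so its recorded rate — which is all a model trained on arrest and court records can see — comes out higher.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)           # 0 = group A, 1 = group B
reoffend = rng.random(n) < 0.30         # same true rate for everyone

# Re-offenses only enter the records if detected; group B is policed
# twice as heavily, so its offenses are caught twice as often.
detect_p = np.where(group == 1, 0.8, 0.4)
recorded = reoffend & (rng.random(n) < detect_p)

# Any "risk score" fit to the records inherits this gap: it sees the
# recorded rates, not the (equal) true rates.
for g, name in ((0, "A"), (1, "B")):
    mask = group == g
    print(f"group {name}: true re-offense rate {reoffend[mask].mean():.2f}, "
          f"recorded rate the model sees {recorded[mask].mean():.2f}")
```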

In October, Kate Crawford and Meredith Whittaker, cofounders of AI Now, published a policy report on the urgent need to create ethical standards to manage the use and impact of artificial intelligence. They wrote, “New ethical frameworks for AI need to move beyond individual responsibility to hold powerful industrial, governmental and military interests accountable as they design and employ AI.” In her interview with Wired.com, Crawford points out that “when we talk about ethics, we forget to talk about power.” Understanding power would help us understand the systemic injustice rooted in the ways algorithms are used to exploit people’s behaviors and perpetuate inequity.

The Algorithmic Justice League, led by Joy Buolamwini, gives us hope as it continues to develop concrete steps for activists, artists, coders, companies, academics, and legislators to work together to create and implement ethical standards with an anti-oppressive lens on power. This weekend, November 17-19, Data for Black Lives will be livestreaming its conference, which will interrogate many of the concerns raised by Tufekci, O’Neil, Crawford, and Whittaker. As activists committed to mental health, racial justice, and feminism, it is important that we stay connected to the ways our digital age is directly impacting our personal lives and the communities we are tirelessly fighting alongside.

Source

http://feministing.com/2017/11/17/can-algorithmic-justice-save-us-from-the-coming-artificial-intelligence-dystopia/

Humans vs Robots: Artificial Intelligence Beats Top Pilot in NASA and Google Drone Race

In another addition to the ever-expanding list of things robots can do better than humans, artificial intelligence has beaten one of NASA’s world-class pilots in a drone race.

Researchers at NASA’s Jet Propulsion Laboratory in Pasadena, California, revealed Tuesday the results of two years spent developing algorithms for autonomous drones, using technology also used for spacecraft navigation, in a project funded by Google. The space agency put its AI to the test on October 12, finding that its robot was nimbler and did not tire like a human pilot.

The race pitted NASA drone pilot Ken Loo against custom-built drones named after the comic book characters Batman, Joker and Nightwing, each capable of reaching speeds of up to 80 miles per hour. Over the course of the race, factors such as natural human aggression and fatigue gave the robots the advantage.

“We pitted our algorithms against a human, who flies a lot more by feel,” said Rob Reid, the project’s task manager. “You can actually see that the AI flies the drone smoothly around the course, whereas human pilots tend to accelerate aggressively, so their path is jerkier.”

Above: Engineers recently finished developing three drones and the artificial intelligence needed for them to navigate an obstacle course by themselves. As a test of these algorithms, they raced the drones against a professional human pilot. (NASA/JPL-Caltech)

To start with, both Loo and the AI achieved similar lap times, but as the race continued, the robot drones learned the course and got faster, whereas the human pilot slowed down due to mental exhaustion. “Our autonomous drones can fly much faster,” Reid said. “One day you might see them racing professionally.”

On the official lap times, Loo was faster, averaging 11.1 seconds compared to the autonomous drones’ 13.9 seconds. But the drones were more consistent overall: where Loo’s times varied, the AI flew the same racing line every lap.
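
To see that trade-off in numbers, here is a quick calculation with hypothetical lap times. The article reports only the two averages, so the individual laps below are invented to match them: the human is faster on average, the drone far more consistent.

```python
from statistics import mean, stdev

loo = [9.8, 10.2, 11.5, 13.1, 10.9]      # hypothetical human laps (mean 11.1)
drone = [13.8, 14.0, 13.9, 13.9, 13.9]   # hypothetical drone laps (mean 13.9)

for name, laps in (("Loo", loo), ("drone", drone)):
    print(f"{name}: mean {mean(laps):.1f}s, spread (stdev) {stdev(laps):.2f}s")
```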

“This is definitely the densest track I’ve ever flown,” Loo said. “One of my faults as a pilot is I get tired easily. When I get mentally fatigued, I start to get lost, even if I’ve flown the course 10 times.”

However, it wasn’t all bad news for team humanity. The drones sometimes flew too fast for their own good: motion blur caused them to lose track of their surroundings. Loo was also able to perform more impressive aerial acrobatics, such as corkscrews, which flummoxed the drones.

Robots have already started replacing humans in many walks of life. As many as 7.5 million retail jobs that don’t require a great deal of human analysis are at risk of becoming automated, according to CNN, as are driving jobs, because some of the world’s biggest companies are investing billions of dollars to develop self-driving vehicles.

According to The Guardian, 72 percent of Americans are very or somewhat worried about a future where robots and computers are capable of performing many human jobs. Only a third of people said they were excited by the prospect.

Source

http://www.newsweek.com/humans-vs-robots-artificial-intelligence-beats-top-pilot-nasa-and-google-drone-720769

Elon Musk warns there’s only ‘a 5 to 10% chance’ that artificial intelligence won’t kill us all

Artificial Intelligence will destroy humankind, cautions Elon Musk

Elon Musk, the founder of SpaceX and Tesla, who has voiced his fears time and again about artificial intelligence (AI) being a threat to humanity, warned in a recent talk with employees at his neurotechnology company, Neuralink Inc., that we have only ‘a 5 to 10% chance’ of stopping killer robots from destroying humankind, according to Rolling Stone.

Musk, who is famous for his futuristic claims, said that people have almost no chance of creating a completely safe AI. He put the odds of making AI safe at only 5 to 10 percent, and said the probability of creating dangerous robots increases every year. Like many of his peers, Musk supports serious regulation of AI, as soon as possible.

Musk’s latest claims follow a warning he made in July that regulation of AI is required because it’s a “fundamental risk to the existence of human civilization.”

He said, “Normally the way regulations are set up is when a bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry.”

In 2014, Musk had predicted AI was on the verge of something “seriously dangerous.” Last year, he declared humans had basically lost the battle against AI already, and that the only way to beat them was to join them.

“Under any rate of advancement in AI, we will be left behind by a lot,” Musk said. “The risk of something seriously dangerous happening is in the five-year timeframe. Ten years at most. The benign situation with ultra-intelligent AI is that we would be so far below in intelligence we’d be like a pet or a house cat.”

In order to combat this, Musk proposed that we prepare by developing our natural intelligence to the next level. His startup Neuralink is working on a project called neural lace, in which tiny electrodes would be implanted into the brain to manage functions like memory, with the eventual possibility of uploading and downloading thoughts to a computer. The ultimate goal of this technology is to enhance memory function and provide more direct interaction between humans and computer interfaces, or to give humans added artificial intelligence.

To fully understand the risks of AI, governments must have a better understanding of the technology’s rapid evolution, he said. “Once there is awareness, people will be extremely afraid, as they should be… By the time we are reactive in AI regulation, it’ll be too late,” he said at the summer conference of the National Governors Association in Rhode Island.

In September, Musk said that AI could be the reason for World War III, likely a war that will involve the killer robots he warned about in August.