Companies cheat with fake artificial intelligence services

Instead of algorithms, real people perform the services these companies present as the work of artificial intelligence

Artificial intelligence is a hot topic in the world of technology, and it will remain one for a long time. Right now, every company in the field would like to be among the first to offer an effective artificial intelligence product. Some companies want it so much that they are ready to commit fraud, says The Guardian. One of the popular scams is presenting as artificial intelligence services that are actually performed by very real people.

The topic came to attention after The Wall Street Journal reported that companies offer a variety of services to improve Gmail, such as smart replies. Edison Software, for example, did not mention in its service description that real people would also review users' emails in order to select or verify the smart replies supposedly prepared by its artificial intelligence.

There are many other examples. Spinvox, for example, was accused years ago that behind its voicemail-to-text software were actually people transcribing the messages by hand. The virtual assistants X.ai and Clara were likewise accused of actually being people working 12-hour shifts. Expensify also acknowledged last year that at least some of the receipts processed by its software are actually entered by people.

In some cases, the deception is not deliberate. Some companies, for example, use real people whose work is used to train the artificial intelligence itself. The problem is that they fail to mention this detail to users, who assume that nobody is reading their data. In other cases, however, the "fake it till you make it" approach is taken, and people are deliberately used until the artificial intelligence actually becomes capable of performing the advertised service.

Some companies see no problem with this. "You are simulating what the final experience will be, and in the case of artificial intelligence there is a person behind the curtain rather than an algorithm," said Alison Darcy, founder of the bot psychologist Woebot. She adds that creating good artificial intelligence takes a great deal of money and effort, and sometimes companies want to see whether there is enough interest in their service before investing. However, Darcy agrees that the lack of transparency damages the technology and creates unease and distrust among people.

In effect, this turns people into a tool for prototyping artificial intelligence, says Gregory Koberger, CEO of ReadMe.

The question remains of how ethical this is. A study by scientists at the University of Southern California shows that people tend to share and reveal more when they believe they are interacting with a machine rather than a person, especially when it comes to digital services.

Recently, Google encountered this problem firsthand with its new Google Duplex service, which simulates a real person on the phone to make inquiries and bookings and answer basic questions. The behavior of the artificial intelligence is so realistic that it provoked public displeasure. Google has therefore announced that in the future the service will identify itself, so that users who interact with it know they are not talking to a real person.