Tag: Artificial intelligence

    Dangerous technology trends to be aware of

    What are the 7 most dangerous technology trends in 2020?

The latest technologies bring convenience, wonder and plenty of positive emotions into our lives, but we must stay vigilant about the possible negative consequences of innovation. Here is a list of the 7 most dangerous technology trends that have already shown their dark side.

    Fake news

"Propaganda is the art of photographing the devil without hooves and horns."

Hans Kasper (1916–1990), German writer and radio playwright.

Programs like GROVER, an artificial intelligence system capable of writing a fake news article on any topic, are steadily proliferating in the world's media. Such programs generate more believable articles than human copywriters do. The secret is that the AI processes large amounts of data and backs up its articles with facts a reader may not even be aware of.

The most successful player in this field has been the non-profit OpenAI, backed by Elon Musk. Its results are so good that the organization initially decided not to release its research publicly, in order to prevent dangerous misuse of the technology. The main danger of believable fake news is that its quality can deceive even critically minded people who are not susceptible to ordinary propaganda.

    Drone swarms

The most high-profile example of the malicious use of drones was the recent attack on oil facilities in Saudi Arabia, which caused enormous financial damage. The attack temporarily knocked out about 5% of the world's oil supply, and the convenience of drones for military operations suggests that such incidents will keep happening.

A swarm of drones can organize itself around a goal by having its members interact with one another, creating a new type of extremely dangerous weapon. For now the technology is still experimental, but swarms capable of coordinating their behavior to carry out the most difficult military tasks are quickly becoming reality.

Spying by smart home devices

For "smart home" devices to respond to requests and be as helpful as possible, they have to be equipped with microphones that listen for voice commands. When people install a smart speaker in their room, they effectively agree that a meticulous little spy now lives in their house, one that does not miss a single word.

All smart devices collect information about habits and preferences, home addresses and travel routes, arrival and departure times. This information makes life more convenient, but it also creates opportunities for abuse. Thieves and fraudsters are actively building tools to take over everything these devices collect. If personal data is successfully stolen from the cloud servers of Amazon, Google, Yandex or any other artificial intelligence platform, attackers gain all the information they need for blackmail or for stealing physical valuables from the home.

One of the loudest scandals in this area broke this spring, when it emerged that Amazon employees had been listening to recordings of customers' conversations with the Amazon Echo smart speaker. Obviously, Amazon employees are not supposed to use that information for personal gain, but no one can guarantee it.

Face recognition through video cameras

Smartphone maker Huawei was accused last year of using facial recognition technology for surveillance and racial profiling, and of handing Chinese intelligence access keys to foreign networks.

Millions of cameras on smartphones and laptops are used to track and recognize people, and not only in the PRC: the practice has been observed in almost every country in the world. The only difference is that in some places it could be proven (the Snowden case in the USA) and in others it could not.

    AI cloning

Artificial intelligence can generate phrases in the voice of any person; a single fragment of an audio recording of that voice is enough. Similarly, AI can create a video of anyone that looks natural and believable. The potential negative consequences of such videos and audio recordings are wide-ranging, from hacked bank accounts to blackmail and political scandals.

Deepfakes use face mapping, machine learning and artificial intelligence to recreate a person's appearance and behavior. Images of a person's face from many different angles serve as the raw data for generating a fake, and these are easiest to obtain from a large collection of recordings. Previously, this kind of openly available footage existed only for celebrities, but social media has changed that. Now ordinary people upload gigabytes of amateur video of themselves, providing deepfake makers with all the source material they need.

So far, a program created at the University of California, Berkeley has been the most successful at identifying fake videos.

    Viruses and phishing combined with AI

Artificial intelligence greatly simplifies the work of phishing networks, which need an effective tool to automate and scale their operations. AI helps attackers find "weak" email addresses and social media accounts, compose more convincing phishing emails, bypass antivirus software more efficiently and, most importantly, collect money automatically without the risk of being traced by law enforcement. In recent years, the proceeds have increasingly been withdrawn in cryptocurrency.

    Smart dust

Microelectromechanical systems (MEMS) can be as small as a grain of sand. The smallest unit of smart dust is called a mote: a device with its own computing unit, sensors, power supply and data transmission system. Motes can combine with other specks to form what is known as smart dust. Such systems are already being used on an industrial scale by cybercriminals when surveillance needs to be concealed as thoroughly as possible.
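As a rough illustration of what a single mote bundles together, here is a minimal sketch in Python. The Mote class, its fields and the tiny swarm example are hypothetical and exist only to make the description above concrete; they do not model any real smart dust product.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Mote:
    """Hypothetical model of one smart-dust mote: on-board compute,
    sensors, a power supply and a data-transmission link."""
    mote_id: str
    battery_mah: float                                         # power supply
    readings: Dict[str, float] = field(default_factory=dict)   # sensor outputs

    def sense(self, channel: str, value: float) -> None:
        """Record one sensor measurement (e.g. temperature, light, sound)."""
        self.readings[channel] = value

    def transmit(self) -> dict:
        """Package current readings for the radio link to neighboring motes."""
        self.battery_mah -= 0.01   # transmitting costs a little power
        return {"id": self.mote_id, "data": dict(self.readings)}

# A "swarm" is simply many such nodes sharing what they sense.
swarm = [Mote(mote_id=f"mote-{i}", battery_mah=5.0) for i in range(3)]
for m in swarm:
    m.sense("temperature_c", 21.5)
print([m.transmit() for m in swarm])
```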

    NEWS

Chinese developers start to use AI to identify idlers and monitor safety on construction sites

    How China Tracks Everyone

Construction companies in China are beginning to deploy artificial intelligence systems that monitor worker behavior and the overall situation on site.

Most projects run on a tight schedule, and in the rush, disruptions and accidents are common. In some large cities, more people die on construction sites than in any other industry. So the Chinese have decided to automate oversight.

According to a report from the Beijing Institute of Automation, the new monitoring technology watches the site through video cameras and analyzes the behavior of each employee, identifying them by biometric parameters. Real-world tests confirm that the AI system improves productivity and safety on the construction site.

The algorithms can detect different types of activity: working, wandering around, smoking and using a smartphone. In addition, the system records and reports safety violations, for example when an employee forgets to put on a helmet, enters a restricted area or behaves aggressively.
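To make that alerting logic concrete, here is a minimal sketch of how detections like these could be turned into notifications. The activity labels, the WorkerObservation structure and the rules are illustrative assumptions, not the actual system described in the Beijing Institute of Automation report.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical labels a camera-based activity classifier might emit.
IDLE_ACTIVITIES = {"wandering", "smoking", "using_smartphone"}
RESTRICTED_ZONES = {"crane_radius", "excavation_pit"}

@dataclass
class WorkerObservation:
    worker_id: str          # identity resolved from biometric matching
    activity: str           # e.g. "working", "wandering", "smoking"
    wearing_helmet: bool
    zone: Optional[str]     # area of the site the worker is currently in

def check_violations(obs: WorkerObservation) -> List[str]:
    """Turn one observation into human-readable alerts, mirroring the rules
    described above (missing helmet, restricted area, idling)."""
    alerts = []
    if not obs.wearing_helmet:
        alerts.append(f"{obs.worker_id}: no helmet detected")
    if obs.zone in RESTRICTED_ZONES:
        alerts.append(f"{obs.worker_id}: entered restricted area '{obs.zone}'")
    if obs.activity in IDLE_ACTIVITIES:
        alerts.append(f"{obs.worker_id}: idling ({obs.activity})")
    return alerts

print(check_violations(
    WorkerObservation("W-042", activity="smoking", wearing_helmet=False, zone="crane_radius")
))
```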

Research by McKinsey shows that artificial intelligence technologies are being adopted very slowly in the construction business because of the industry's weak growth rates.

We have also previously written about what artificial intelligence will redesign by 2050.

    text: Ilya Bauer, photo: Academy of Sciences of the People’s Republic of China
