What are the 7 most dangerous technology trends in 2020?
The latest technologies bring convenience, delight, and plenty of positive emotions into our lives, but we must stay vigilant about the possible negative consequences of innovation. Here is a list of the 7 most dangerous technology trends that have already shown their dark side.
"Propaganda is the art of photographing the devil without hooves and horns."
Hans Kasper (1916–1990), German writer and radio playwright.
Fake news written by artificial intelligence
The number of programs like GROVER, an artificial intelligence system capable of writing a fake news article on any topic, is growing steadily in the world's media. Such programs generate more believable articles than human copywriters do. The secret is that the AI processes large amounts of data and backs its articles with facts that a person might not even guess at.
The most successful in this direction has been the non-profit OpenAI, backed by Elon Musk. Its results are so convincing that the organization initially decided not to release its research publicly, in order to prevent dangerous misuse of the technology. The main danger of believable fake news is that its quality can deceive even critically minded people who are not otherwise susceptible to propaganda.
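The core idea behind such text generators can be illustrated with a toy statistical model. The sketch below is a minimal word-level Markov chain in Python: a deliberately crude stand-in for systems like GROVER, which rely on far larger neural language models. The sample corpus and all names here are invented for illustration.

```python
import random
from collections import defaultdict

def build_model(text):
    """Map each word to the list of words observed immediately after it."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Walk the chain, picking a random observed successor at each step."""
    rng = random.Random(seed)
    word = start
    out = [word]
    for _ in range(length - 1):
        successors = model.get(word)
        if not successors:
            break  # dead end: no word was ever seen after this one
        word = rng.choice(successors)
        out.append(word)
    return " ".join(out)

# Tiny invented "news" corpus; a real system would train on millions of articles.
corpus = (
    "officials said the report was accurate . "
    "officials said the findings were troubling . "
    "the report said officials were troubled ."
)
model = build_model(corpus)
print(generate(model, "officials"))
```

Even this toy produces locally plausible word sequences; scaling the same statistical idea up to neural networks trained on web-scale corpora is what makes machine-written articles hard to distinguish from human ones.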
Combat drones and drone swarms
The most high-profile example of the malicious use of drones was the recent attack on oil facilities in Saudi Arabia, which caused hundreds of billions of dollars in damage. As a result of the attack, global oil supply fell by about 5%, and the convenience of using drones for military operations suggests that such incidents will keep occurring.
A swarm of drones can organize itself to achieve a goal, its members interacting with one another, creating a new type of ultra-dangerous weapon. For now the technology is still experimental, but swarms able to coordinate their behavior to accomplish the most difficult military tasks are fast becoming reality.
Spying by smart home devices
For "smart home" devices to respond to requests and be as helpful as possible, they must be equipped with microphones that listen for voice commands. When people install a smart speaker in their room, they are effectively agreeing that a scrupulous little spy now lives in the house, one that does not miss a single word.
All smart devices collect information about habits and preferences, places of residence and travel routes, arrival and departure times. This information makes life more convenient, but it also opens the door to abuse. Thieves and crooks are actively working on tools that would let them take over everything these devices collect. If personal data is successfully stolen from the cloud servers of Amazon, Google, Yandex, or any other artificial intelligence platform, attackers will have all the information they need for blackmail or for stealing real things from the home.
One of the loudest scandals in this area broke this spring, when it emerged that Amazon employees listen to recordings of customers' conversations with the Amazon Echo smart speaker. Obviously, Amazon employees are not supposed to use that information for personal gain, but no one can guarantee it.
Face recognition through video cameras
Smartphone maker Huawei was accused last year of using facial recognition technology for surveillance and racial profiling, and of handing access keys to foreign networks over to Chinese intelligence.
Millions of cameras on smartphones and laptops are used to track and recognize people not only in the PRC: the practice has been observed in almost every country in the world. The only difference is that in some places it could be proven (the Snowden case in the USA) and in others it could not.
Deepfake audio and video
Artificial intelligence can generate phrases in the voice of any person; a single fragment of audio recording with that voice is enough. Similarly, AI can create a video of anyone that looks natural and believable. The potential negative consequences of such videos and audio recordings range from hacked bank accounts to blackmail and political scandals.
Deepfake uses face mapping, machine learning, and artificial intelligence to fabricate realistic behavior. Different views of a person's face serve as the input for generating a fake, and they are easiest to obtain from a large collection of recordings. Previously this kind of openly available data existed mainly for celebrities, but social media has changed that: ordinary people now upload gigabytes of amateur video of themselves, providing deepfake makers with all the source material they need.
So far, the most successful tool for identifying fake videos is a program created at the University of California, Berkeley.
Viruses and phishing combined with AI
Artificial intelligence greatly simplifies the work of phishing networks, which need an effective tool to automate and scale their operations. AI helps attackers find "weak" email addresses and social media accounts, compose more convincing phishing emails, bypass antivirus software more efficiently, and, most importantly, collect money automatically without the risk of being tracked by law enforcement. In recent years, stolen funds have increasingly been withdrawn using cryptocurrencies.
Smart dust
Microelectromechanical systems (MEMS) are microscopic, down to the size of a grain of sand. The smallest piece of smart dust is called a mote: a sensor with its own computing unit, sensing elements, power supply, and data transmission system. Motes can combine with other specks to form what is called smart dust. Such systems are already being used on an industrial scale by cybercriminals when surveillance needs to be hidden as thoroughly as possible.