It’s AI against hackers

Microsoft quickly notified its customers, and the attack was thwarted before the intruder could get far.

Chalk it up to a new generation of artificially intelligent software that adapts to hackers’ constantly evolving tactics. Microsoft, Alphabet Inc.’s Google and various startups are moving away from older “rules-based” technologies designed to respond to specific types of intrusions and deploying machine-learning algorithms that crunch vast amounts of data on logins, behavior and previous attacks to detect and stop intruders.

“Machine learning is a very powerful technique for security – it’s dynamic, while rules-based systems are rigid,” said Dawn Song, a professor at the University of California, Berkeley’s Artificial Intelligence Research Lab. “Changing them is a manual process, whereas machine learning is automatic, dynamic and you can easily retrain it.”

Hackers themselves are notoriously adaptable, of course, so they can also harness machine learning to create new exploits and bypass new defenses. For example, they can figure out how companies train their systems and use that data to evade or subvert the algorithms. Big cloud services companies are painfully aware that the enemy is a moving target but argue that the new technology will help tip the balance in favor of the good guys.

“We will see an improved ability to identify threats earlier in the attack cycle, reducing the overall amount of damage and quickly restoring systems to a desired state,” said Amazon’s chief security officer Stephen Schmidt. He acknowledged that it’s impossible to stop all intrusions, but said his industry will get “increasingly better at protecting systems and making it harder for attackers.”

Before machine learning, security teams relied on cruder tools. For example, if someone based at headquarters tried to log in from an unfamiliar location, they would simply be barred from logging in. Or spam emails containing various misspellings of the word “Viagra” were blocked. Such systems work much of the time.
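The static rules described above can be sketched in a few lines of Python. This is purely illustrative – the locations, spam variants and function names are all invented, not drawn from any vendor’s actual product:

```python
# Hypothetical rules-based checks of the kind the article describes:
# fixed lists, maintained by hand, with no notion of per-user context.
KNOWN_LOCATIONS = {"Seattle", "Redmond"}          # where HQ staff normally log in
SPAM_VARIANTS = {"viagra", "v1agra", "vi@gra"}    # hand-maintained misspelling list

def allow_login(user_location: str) -> bool:
    # Rule: block any login from a location not on the approved list.
    return user_location in KNOWN_LOCATIONS

def is_spam(email_body: str) -> bool:
    # Rule: flag mail containing any known misspelling of "viagra".
    words = email_body.lower().split()
    return any(variant in words for variant in SPAM_VARIANTS)

print(allow_login("Seattle"))        # True - a routine HQ login passes
print(allow_login("Bucharest"))      # False - unfamiliar location, blocked outright
print(is_spam("cheap v1agra here"))  # True - matches a listed misspelling
```

The rigidity is the point: a traveling employee and an intruder from the same unfamiliar city are treated identically, which is exactly the false-positive problem the next paragraphs describe.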

But they also flag many legitimate users, as anyone who’s had a credit card blocked while on vacation knows. Microsoft’s system designed to protect customers from fraudulent logins had a false positive rate of 2.8%, according to Azure CTO Mark Russinovich. That may not sound like much, but it was deemed unacceptable, since Microsoft’s biggest customers can generate billions of logins.

Photo: Alamy


To do a better job of distinguishing legitimate users from illegitimate ones, Microsoft’s technology learns from the data of each company that uses it, customizing security to that client’s typical online behavior and history. Since launching the service, the company has managed to cut the false positive rate to 0.001%. It was this system that flagged the intruder logging in from Romania.
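A toy sketch of what such per-customer baselining might look like: instead of one global rule, each tenant gets a model built from its own login history, so a rare location scores high only for tenants that never see it. The class, data and scoring formula here are hypothetical, not Microsoft’s actual method:

```python
from collections import Counter

class TenantLoginModel:
    """Scores a login country against one tenant's own history."""
    def __init__(self, history):
        # history: list of countries seen in this tenant's past logins
        self.counts = Counter(history)
        self.total = len(history)

    def risk_score(self, country: str) -> float:
        # Fraction of past logins NOT from this country; 1.0 = never seen before.
        return 1.0 - self.counts[country] / self.total

# A tenant whose staff log in mostly from the US, occasionally from the UK:
model = TenantLoginModel(["US"] * 95 + ["GB"] * 5)
print(round(model.risk_score("US"), 2))  # 0.05 - routine
print(round(model.risk_score("GB"), 2))  # 0.95 - rare but known
print(round(model.risk_score("RO"), 2))  # 1.0  - never seen; flag for review
```

Because the baseline is learned per tenant, a company with a Bucharest office would score Romanian logins as routine – which is how this approach cuts false positives that a global blocklist would generate.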

Training security algorithms falls to people like Ram Shankar Siva Kumar, a Microsoft manager who goes by the title Data Cowboy. Siva Kumar joined Microsoft six years ago from Carnegie Mellon because his sister was a fan of Grey’s Anatomy, the medical drama set in Seattle. He manages a team of about 18 engineers who develop machine-learning algorithms and then make sure they are smart enough to resist hackers and work seamlessly with the software systems of the companies paying for Microsoft’s major cloud services. Siva Kumar is one of the people who gets called when the algorithms detect an attack. He has been woken in the middle of the night, only to discover that Microsoft’s in-house “red team” of hackers was responsible.

The challenge is daunting. Millions of people log into Google’s Gmail every day. “The amount of data we need to look at to decide whether this is you or an impostor is growing at a scale too large for humans to write the rules,” said Mark Risher, a director of product management at Google who helps prevent attacks on its customers.

Google now checks for security breaches even after a user has logged in, which is useful for catching thieves who initially pass themselves off as real users. With machine learning able to analyze many different pieces of data, catching unauthorized access is no longer just a yes-or-no decision at login. Rather, Google monitors various aspects of behavior throughout a user’s session. Someone who seems legitimate at first may later show signs that they are not who they say they are, allowing Google’s software to eject them in time to prevent further damage.
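Continuous in-session scoring of this kind can be sketched as risk signals accumulating until a threshold forces the session to end. The signal names, weights and threshold below are invented for illustration, not Google’s:

```python
# Hypothetical mid-session risk scoring: instead of a one-time yes/no
# at login, each suspicious event adds to a running score, and the
# session is terminated once the score crosses a threshold.
SIGNAL_WEIGHTS = {
    "new_device": 0.3,
    "impossible_travel": 0.5,   # e.g. two logins an ocean apart in minutes
    "bulk_mail_export": 0.4,
}
EJECT_THRESHOLD = 0.7

def monitor_session(events):
    score = 0.0
    for event in events:
        score += SIGNAL_WEIGHTS.get(event, 0.0)
        if score >= EJECT_THRESHOLD:
            return "terminate"  # boot the user before more damage is done
    return "allow"

# Looks fine at login, then starts behaving suspiciously mid-session:
print(monitor_session(["new_device"]))                      # allow
print(monitor_session(["new_device", "bulk_mail_export"]))  # terminate
```

The design choice worth noting: no single signal is damning on its own, but the running total lets the system act midsession, which is exactly the “no longer yes or no” shift the article describes.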

Amazon’s Macie service uses machine learning to find sensitive data among company information from customers like Netflix, then watches who accesses it and when, alerting the company to suspicious activity.

In addition to using machine learning to secure their web and cloud services, Amazon and Microsoft are offering the technology to customers. Amazon’s GuardDuty monitors customer systems for malicious or unauthorized activity. Many times the service finds employees doing things they shouldn’t – like installing bitcoin mining software on work computers.

Dutch insurance company NN Group NV uses Microsoft’s Advanced Threat Protection to manage access for its 27,000 employees and close partners, while keeping everyone else out. Earlier this year, Wilco Jansen, the company’s director of workplace services, was demonstrating to employees a new feature in Microsoft’s Office cloud software that blocks so-called CxO spamming, in which spammers pose as senior executives and instruct recipients to transfer funds or share personal information.

Ninety minutes into the demonstration, the security operations center called to report that someone had attempted that exact attack on the head of NN Group. “We were like, ‘Oh, this feature could have already prevented this from happening,’” Jansen said. “We need to remain constantly alert, and these tools help us see things that we can’t monitor manually.”

Machine-learning security systems don’t work in all situations, especially when there isn’t enough data to train them on. And researchers and companies worry constantly that they will be outwitted by hackers.

For example, hackers can mimic users’ activity to fool algorithms that look for typical behavior. Or they can tamper with the data used to train the algorithms, skewing it to serve their own ends – so-called poisoning. That is why it is so important for companies to keep their algorithms’ criteria secret and change the formulas regularly, says Battista Biggio, a professor at the University of Cagliari’s Pattern Recognition and Applications Laboratory in Sardinia, Italy.
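A toy example of poisoning, using a deliberately simple nearest-centroid classifier (all numbers and labels invented): by slipping mislabeled points into the training data, an attacker drags the learned boundary until real attacks score as benign.

```python
def train(samples):
    # samples: (failed_login_attempts, label) pairs; returns per-class means
    totals = {}
    for x, label in samples:
        s, n = totals.get(label, (0, 0))
        totals[label] = (s + x, n + 1)
    return {label: s / n for label, (s, n) in totals.items()}

def classify(x, centroids):
    # Assign x to whichever class mean it sits closest to.
    return min(centroids, key=lambda label: abs(x - centroids[label]))

clean = [(1, "benign"), (2, "benign"), (20, "malicious"), (22, "malicious")]
# The attacker slips high-attempt points mislabeled "benign" into training:
poisoned = clean + [(18, "benign"), (19, "benign"), (21, "benign")]

print(classify(15, train(clean)))     # malicious - 15 failed attempts is suspect
print(classify(15, train(poisoned)))  # benign - the boundary has shifted
```

Three mislabeled points are enough to flip the verdict here, which is why Biggio’s advice – secret criteria, regularly changed formulas, and presumably vetted training data – matters in practice.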

So far, these threats appear in more research papers than in real life. But that is likely to change. As Biggio wrote in a paper last year: “Security is an arms race, and machine learning security and pattern recognition systems are no exception.”
