Cybersecurity is continually evolving as security experts and their adversaries alike develop new techniques and refine their trade. Researchers, businesses, and professionals from the security community discussed emerging trends in the field at the 8th semiannual UC Cyber Security Summit, hosted at UC Santa Barbara.
Speakers covered topics such as creating, weaponizing, and detecting deepfakes, an issue of particular importance as election season kicks into gear. They also addressed threats to critical infrastructure as well as phishing, the focus of UC Santa Barbara’s Cyber Security Awareness Month activities this year.
Giovanni Vigna, a professor of computer science, director of the campus’ Center for CyberSecurity, and co-director of the Security Lab, delivered the keynote address, discussing how artificial intelligence is transforming the security landscape. According to Vigna, identifying threats and managing network security is demanding, time-consuming work. Security professionals are looking to AI to automate and streamline their efforts while also extending their reach.
AI can efficiently classify computer activity and even learn to categorize it on its own, streamlining this monotonous work, he explained. It makes anomaly detection possible at a scale, scope, and efficiency unthinkable when the concept was developed in the 1980s. It can even allow security teams to go on the offensive: proactively gathering intelligence and developing tools to counter attacks their network hasn’t encountered yet.
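As a rough illustration of the kind of anomaly detection Vigna describes, the sketch below fits scikit-learn’s IsolationForest to simulated network-flow statistics and flags outliers. The features, numbers, and thresholds are all invented for the example; this is not code from his lab.

```python
# Minimal sketch of statistical anomaly detection on network activity.
# All feature names and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" traffic: [bytes sent, packet count, distinct ports]
normal = rng.normal(loc=[5000, 40, 3], scale=[800, 5, 1], size=(1000, 3))

# A handful of unusual flows: huge transfers touching many ports
odd = rng.normal(loc=[90000, 400, 40], scale=[5000, 30, 5], size=(5, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns -1 for points the model considers anomalous
flags = model.predict(np.vstack([normal[:3], odd]))
print(flags)  # expected: 1s for the normal flows, -1s for the odd ones
```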
But AI is not a clear-cut answer to our security challenges. Malicious actors are well known for mounting insidious false-flag attacks, which can involve AI as well. “What’s more, the adversary can poison the dataset from which you’re learning,” Vigna said, like setting out decoys. As an analogy, he pointed to the antics of people who tag inanimate objects as people in pictures they post on Facebook. This corrupts the data the company’s software uses to learn.
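The decoy analogy can be made concrete with a toy experiment: systematically mislabeling part of a training set, the way mis-tagged photos corrupt a tagging model’s labels, measurably degrades the model learned from it. Everything below is synthetic and purely illustrative.

```python
# Toy illustration of training-data poisoning via mislabeled examples.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # clean, learnable ground truth

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

clean = LogisticRegression().fit(X_tr, y_tr)

# The "decoys": systematically mislabel one region of the training data,
# the way mis-tagged photos corrupt a photo-tagging model's labels
poisoned = y_tr.copy()
poisoned[(X_tr[:, 0] + X_tr[:, 1]) > 0.5] = 0

dirty = LogisticRegression().fit(X_tr, poisoned)

# Accuracy of the poisoned model should drop sharply on honest test data
print("trained on clean labels:   ", clean.score(X_te, y_te))
print("trained on poisoned labels:", dirty.score(X_te, y_te))
```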
Incorporating AI into cybersecurity doesn’t change the fact that security professionals still need to keep their methods safe from attackers. Machine learning makes this even more crucial and more difficult. Researchers have reported a flaw that allows an individual to steal what a system has learned: a procedure for reverse engineering an AI algorithm based on the answers it provides. “If I can question you as an oracle about what you think about my data and what you have learned, then I can steal anything you ever learned,” said Vigna.
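The oracle attack Vigna describes has a simple shape, sketched below under toy assumptions: an attacker with nothing but query access sends the deployed model inputs of their choosing, records its answers, and trains a local surrogate that mimics what the victim learned. The models and data here are stand-ins, not the attack from the reported research.

```python
# Sketch of model extraction through oracle queries.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)

# Victim: a model its owner trained on private data
X_private = rng.normal(size=(1000, 4))
y_private = (X_private[:, 0] * X_private[:, 1] > 0).astype(int)
victim = DecisionTreeClassifier(max_depth=6).fit(X_private, y_private)

# Attacker: no data access, only the ability to ask for predictions
X_probe = rng.normal(size=(5000, 4))   # attacker-chosen queries
y_oracle = victim.predict(X_probe)     # the oracle's answers

stolen = DecisionTreeClassifier(max_depth=6).fit(X_probe, y_oracle)

# Measure how often the stolen copy agrees with the victim on fresh inputs
X_eval = rng.normal(size=(2000, 4))
agreement = (stolen.predict(X_eval) == victim.predict(X_eval)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of fresh inputs")
```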
“In my lab, we’re focusing on machine learning in many different ways, not only to detect stuff,” he added. “For example, we use machine learning to identify which parts of the code are more likely to contain vulnerabilities.” The use of AI will not put an end to the challenge of cybersecurity. AI is just another tool, and its consequences will be only as good or as bad as the intentions of the people wielding it. “The bad guys are using artificial intelligence, too,” Vigna pointed out.
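A generic, heavily simplified version of that idea is to treat source code as text and train a classifier to rank snippets by how vulnerability-prone they look. The bag-of-tokens baseline below is one hypothetical way to do it, with invented labels; it is not the Security Lab’s actual technique.

```python
# Rough sketch: rank code snippets by estimated vulnerability-proneness.
# Generic bag-of-tokens baseline with hypothetical labels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

snippets = [
    "strcpy(buf, user_input);",                    # classic overflow pattern
    "strncpy(buf, user_input, sizeof(buf) - 1);",  # bounded copy
    "system(user_supplied_command);",              # command injection risk
    "execvp(fixed_path, fixed_args);",             # fixed arguments
]
labels = [1, 0, 1, 0]  # 1 = historically vulnerable (invented labels)

vec = CountVectorizer(token_pattern=r"[A-Za-z_]+")
X = vec.fit_transform(snippets)
clf = LogisticRegression().fit(X, labels)

new_code = ["strcpy(dest, argv[1]);"]
prob = clf.predict_proba(vec.transform(new_code))[0, 1]
print(f"estimated vulnerability-proneness: {prob:.2f}")
```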
He also noted that the security community tends to assume that unusual activity equates to malicious activity. But this isn’t necessarily the case. Some attacks look innocuous, while other events, like routine maintenance, look suspicious. Pairing AI anomaly detection with rules crafted by humans to distinguish good and bad actions, as in the sketch below, seems to offer the best of both approaches. Humans are still a vital part of cybersecurity. “You can raise the level of discourse using artificial intelligence, but certain choices still require humans to interpret and build a context to fully address the threat,” Vigna said. We’re the ones who make the judgment calls about what our software finds.
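One hypothetical way to pair the two: a statistical model supplies an anomaly score, and a short, human-written rule set decides which flagged events actually warrant an analyst’s attention. The event fields, threshold, and rules below are all invented for illustration.

```python
# Sketch of human-written triage rules layered on top of an anomaly score.
from dataclasses import dataclass

@dataclass
class Event:
    host: str
    anomaly_score: float   # e.g., from an anomaly detector; higher = stranger
    during_maintenance: bool

# Human-written context the model cannot know on its own
TRUSTED_BACKUP_HOSTS = {"backup01", "backup02"}

def triage(event: Event) -> str:
    if event.anomaly_score < 0.6:
        return "ignore"                   # statistically ordinary
    if event.host in TRUSTED_BACKUP_HOSTS and event.during_maintenance:
        return "log only"                 # unusual but explainable
    return "escalate to analyst"          # unusual and unexplained

print(triage(Event("backup01", 0.9, True)))   # log only
print(triage(Event("web07", 0.9, False)))     # escalate to analyst
```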