Google Leads Cybersecurity and AI Advancements in Málaga

At an event marking a milestone in the convergence of artificial intelligence and cybersecurity, the Google Security Engineering Center in Málaga (GSEC Málaga) hosted the ‘GSEC Cybersecurity Summit: Cyberdefense in the AI era.’ The meeting, held ahead of the NATO Cyber Commanders Forum, brought together industry and government experts to explore innovative solutions in cyber defense and to discuss how cybersecurity can be transformed in this new era dominated by artificial intelligence.

VirusTotal revolutionizes malware analysis with AI

One of the highlights of the event was the presentation of research conducted by VirusTotal, the malware analysis platform owned by Google. The study reveals how Large Language Models (LLMs) are transforming malware analysis, offering promising advances in this crucial field of cybersecurity.

LLMs have demonstrated their ability to streamline the analysis of large volumes of malware binary code, significantly reducing the need for manual work. Although there are still challenges, especially due to the obfuscation techniques used by malware developers, these models simplify the initial analysis process, allowing human analysts to work more efficiently.

Key Factors in LLM Effectiveness

The research has identified several factors that influence the effectiveness of LLMs for malware analysis:

  1. Precise prompts: It is crucial to craft prompts that guide LLMs toward precise and thorough analyses. The quality of the prompts largely determines the utility of the results.
  2. Additional context: Accuracy improves when supplementary information is provided, such as code snippets, execution results in sandbox environments, and threat intelligence. This context enriches the analysis and allows LLMs to offer deeper and more relevant insights.
  3. Obfuscation techniques: Strongly obfuscated code can pose challenges for LLMs. Malware developers use these techniques to hinder analysis, presenting a continuous challenge for AI systems.
  4. Code type: LLMs yield better results with decompiled code, although disassembled code can provide valuable information in cases of advanced obfuscation. This distinction is crucial for optimizing the analysis process and obtaining as much information as possible.
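The first two factors above, precise prompting and additional context, can be illustrated with a short sketch. The function and field names below are illustrative assumptions, not VirusTotal's actual implementation: it simply shows how explicit analyst instructions, decompiled code, sandbox output, and threat intelligence might be assembled into a single prompt for an LLM.

```python
def build_malware_analysis_prompt(decompiled_code: str,
                                  sandbox_report: str = "",
                                  threat_intel: str = "") -> str:
    """Assemble a prompt that pairs explicit instructions with the
    supplementary context (sandbox results, threat intelligence) that
    the research found improves LLM accuracy. Hypothetical sketch."""
    sections = [
        # Factor 1: a precise instruction telling the model what to do.
        "You are a malware analyst. Describe the behavior of the code "
        "below, noting any obfuscation, persistence, or network activity.",
        # Factor 4: decompiled code tends to give better results than
        # raw disassembly.
        "### Decompiled code\n" + decompiled_code,
    ]
    # Factor 2: optional additional context enriches the analysis.
    if sandbox_report:
        sections.append("### Sandbox execution results\n" + sandbox_report)
    if threat_intel:
        sections.append("### Related threat intelligence\n" + threat_intel)
    return "\n\n".join(sections)

prompt = build_malware_analysis_prompt(
    decompiled_code="int main() { download_payload(); persist(); }",
    sandbox_report="Contacted 203.0.113.7:443; wrote a registry run key.",
)
```

The resulting string would then be sent to whichever LLM the analyst uses; heavily obfuscated samples (factor 3) may still defeat the model regardless of how the prompt is structured.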

University of Malaga pioneers cyber education

In an announcement that reinforces Google’s commitment to cybersecurity education, the University of Malaga has become the first institution selected to participate in the Cybersecurity Seminars program. This initiative, supported by Google.org and the European Cyber Conflict Research Incubator (ECCRI), aims to address the shortage of cybersecurity professionals in Europe.

Program Details

  • The University of Malaga will receive a grant of $1 million.
  • Spanish-language teaching materials and intensive training will be provided by ECCRI.
  • The goal is to train at least 200 students.
  • Support will be provided to around 250 local organizations.

This program is part of a broader initiative launched last year by Google.org in collaboration with ECCRI, with a total grant of $15 million. The purpose is to enhance the development and learning of advanced cybersecurity skills, including the role of AI in the threat landscape, and to support local community organizations in implementing their own online protections.

Global Initiatives in cybersecurity and AI

The event also served as a platform to highlight other relevant initiatives in the cybersecurity and AI field:

Secure AI Framework (SAIF)

Launched in June 2023, this framework seeks to establish principles for the secure deployment of AI in organizations of all sizes. SAIF draws inspiration from the security best practices that Google has applied to software development, while incorporating its understanding of security megatrends and the specific risks of AI systems.

Coalition for Secure AI (CoSAI)

Co-founded by Google in July 2024, this coalition brings together tech giants to address the unique risks associated with AI. Founding members include Amazon, Anthropic, Chainguard, Cisco, Cohere, GenLab, IBM, Intel, Microsoft, NVIDIA, OpenAI, PayPal, and Wiz. The coalition is hosted at OASIS Open, the international standards and open-source consortium.

Google’s AI Cyberdefense Initiative

Announced earlier this year, this initiative lays out a plan to invest in collaboration, research, and innovation in AI-empowered cybersecurity. Google presented a report titled "Secure, Empower, Advance: How AI Can Reverse the Defender’s Dilemma" which offers a policy and research agenda with three key recommendations:

  1. Secure AI from the start, prioritizing secure design practices.
  2. Empower defenders over attackers, promoting a balanced regulatory approach.
  3. Foster research collaboration to generate scientific advances in security and software development.

The Defender’s Advantage

On September 18, 2024, Mandiant (part of Google Cloud) published the second edition of the e-book "The Defender’s Advantage." The study details how threat intelligence drives critical cybersecurity functions: Detect, Respond, Validate, Hunt, and Mission Control. Additionally, a supplement on AI has been released that describes how generative AI can be used to enhance cybersecurity, increasing effectiveness, reducing manual work, and improving security team capabilities.

The ‘GSEC Cybersecurity Summit’ event has demonstrated the crucial role that Malaga is playing in advancing global cybersecurity, establishing itself as a center of innovation and education in this critical area for the digital future. The collaboration between academia, industry, and government promises to accelerate the development of AI-based security solutions, paving the way for a safer and more resilient internet in the face of constantly evolving cyber threats.

via: Andalucía Informa
