Research Project Proposals

Cybersecurity & Reliable AI

Theme 1: AI Safety: Design and Verification

Reference: Armando Tacchella <armando.tachella@unige.it> - Luca Oneto <luca.oneto@unige.it>
Abstract: Data-driven inductive techniques in AI are being widely adopted in several applications, including safety- and security-related ones. Given enough data to train with, their promise is to deliver cost-effective solutions to problems that were considered out of reach for traditional techniques, including deductive-based AI. However, most successful inductive models, e.g., deep neural networks, can show problematic behaviors even if their training process is conducted with utmost care. Given their black-box nature, it is also difficult to identify and remove the causes of such issues to ensure safety and/or security. Automated formal verification of neural networks aims to solve this problem by identifying errors and possibly suggesting fixes in a purely algorithmic fashion. The main problem with this approach is that the computational complexity of the analysis is high: for instance, for deep neural networks it is at least NP-hard, and even undecidable in most cases. To facilitate the work of algorithmic verifiers, a synergy between learning and verification should be sought. The purpose of this research is to focus on deep neural networks and identify the best way to train them, in order to be able to verify their compliance to stated requirements in order to enable certification of data-driven models in safety- and security-related contexts.

Theme 2: Trustworthy AI

Reference: Luca Oneto <luca.oneto@unige.it>
Abstract: The aim of this research is to investigates the development of trustworthy Artificial Intelligence (AI) systems, with a focus on creating mechanisms that ensure robustness, transparency, fairness, privacy, and accountability. It should propose a novel framework that integrates state of the art techniques with comprehensive governance policies to build reliable and ethical AI applications. The research assesses the effectiveness of this framework through case studies in high-stakes domains, measuring outcomes through system performance metrics. The ultimate goal is to provide a scalable model for developers and policymakers to implement AI systems that are both technically robust and ethically sound, thereby fostering greater public trust in AI technologies.

Theme 3: Robust Artificial Intelligence with Multi-Modal Large Language Models

Reference: Fabio Roli <fabio.roli@unige.it> 
Abstract: AI-based systems are increasingly deployed in critical applications (e.g., healthcare, transportation, and finance) where robustness and resilience against unexpected or changing conditions are crucial. Despite their potential, these systems have demonstrated vulnerabilities to meticulously crafted attacks, including adversarial examples, data poisoning, and out-of-distribution (OOD) scenarios, which can severely compromise their functionality. This PhD research project seeks to address these challenges by exploring the potential of multi-modal large language models (LLMs) like CLIP and GPT-4 to enhance the robustness of deep neural networks (DNNs). These models, trained to align textual and visual representations, enforce the learning of semantic relationships within the data, potentially making them more resistant to existing attacks. This research will investigate how multi-modal models can synergize with DNNs to detect and abstain from uncertain classifications, thereby mitigating risks posed by adversarial examples. It will also explore techniques for anomaly detection and handling OOD scenarios, ensuring AI systems can reliably classify novel inputs. Additionally, this PhD project will analyze the adaptability of attackers and defenses, focusing on creating resilient AI systems that can evolve in response to new attack strategies and natural data shifts. By integrating these advanced LLMs with DNNs, this research aims to develop methodologies for improved semantic understanding and adaptive learning mechanisms to strengthen and support AI system defenses. 

Theme 4: Security and Safety of AI in Medicine

Reference: Fabio Roli <fabio.roli@unige.it>  
Abstract: AI has the potential to transform healthcare systems and revolutionize the field of medicine, offering unprecedented advancements in diagnosis, treatment, and patient care. However, when deployed in real-world scenarios, AI systems can misclassify (with high confidence) novel inputs that significantly differ from their training data, leading to serious safety concerns. These systems exhibit limited robustness to the novel situations frequently encountered in medical applications. Moreover, AI systems in medicine must often contend with intelligent and adaptive malicious users who can deliberately manipulate data to subvert the learning process. Traditional machine learning algorithms, not originally designed to counter such threats, have demonstrated vulnerabilities to well-crafted attacks, including test-time evasion (adversarial examples) and training-time poisoning. These limitations are often compounded by the fact that these models operate as black-box systems, making it difficult to understand their inner workings, identify limitations, effectively debug, and find malfunctions. This PhD research project seeks to contribute to the development of AI systems that are not only highly accurate but also secure, reliable, and transparent for medical use. The study will explore methods to enhance the robustness of AI models against novel and adversarial inputs, improving their reliability in medical applications. It will investigate strategies for detecting and mitigating the risks posed by adversarial examples and data poisoning attacks. Additionally, the research will focus on developing adaptive learning mechanisms that enable AI systems to better handle novel and unexpected scenarios, ensuring their safe and effective operation in diverse medical environments.

Theme 5: SecLLMOps: Robustness Development and Verification in Large Language Models

Reference: Fabio Roli <fabio.roli@unige.it>  
Abstract: Generative AI, particularly large language models (LLMs), have recently gained considerable attention and popularity due to notable advancements and extensive media coverage, primarily driven by the success of commercial products. These models are extensively trained on large-scale corpora datasets, which have unveiled their unique and remarkable capabilities in processing and generating diverse media content, often exhibiting human-like capabilities. Thanks to their impressive results, they are now being integrated into many industrial pipelines, e.g., automated customer service systems, content creation tools, and advanced data analysis platforms. However, LLMs are vulnerable to malicious prompt injections, which can result in data leaks or system misuse for unintended purposes. Even recent models like ChatGPT, GPT-4, or Llama 2 remain susceptible and struggle to prioritize initial developer and company guideline prompts consistently. To date, all tested countermeasures or prompt hacking detection methods have proven unsuccessful without significantly diminishing the utility of the models. This PhD research project focuses on developing attacks to rigorously test the vulnerability of LLMs and create defenses to safeguard them. The project will explore various attack vectors, including adversarial examples and poisoning attacks, while also evaluating defense mechanisms and standard security principles that can enhance the resilience of LLMs. The goal is to establish a robust SecLLMOps development pipeline that securely trains LLMs, identifies weaknesses, and fortifies them against potential threats, ensuring their safe and reliable deployment in real-world scenarios.

Theme 6: Formal Methods for Industry

Reference: Bozzano Marco <bozzano@fbk.eu>
Funding: Fondazione Bruno Kessler
Abstract: Industrial systems are reaching an unprecedented degree of complexity. The process of designing a complex system is expensive, time consuming and error-prone. Moreover, the design process has to guarantee not only the functional correctness of the implemented system, but also its dependability and resilience with respect to run-time faults. Hence, the design process must characterize the likelihood of faults, mitigate possible failures, and assess the effectiveness of the adopted mitigation measures. Formal methods have been increasingly used over the last decades to deal with the shortcomings of designing a complex system. Formal methods are based on the adoption of a formal, mathematical model of the system, shared between all actors involved in the system design, and on a tool-supported methodology to aid all the steps of the design, from the definition of the architecture down to the final implementation in HW and SW. Formal methods include technologies such as model checking, an automatic technique to symbolically and exhaustively analyze all possible executions of the system in the formal model, in order to detect design flaws as early as possible. Model checking techniques have been recently extended to assess the safety and dependability characteristics of the design, and for system certification. The objective of this study is to advance the state-of-the-art in system design using formal methods. This includes adapting and extending the system design methodology, investigating improved versions of state-of-the-art routines for verification and safety assessment of complex systems, and developing novel extensions to address open problems. Examples of such extensions include novel techniques for contract-based design and contract-based safety assessment, advanced techniques for formal verification based on compositional reasoning, the analysis of the timing aspects of fault propagation, the characterization of transient and sporadic faults, the analysis of the effectiveness of fault mitigation measures in presence of complex fault patterns, and the modeling of analysis of systems with continuous and hybrid dynamics. This study will exploit the challenges and benchmarks defined in various industrial projects carried out at FBK.

Theme 7: Assisted security assessment of cryptographic protocols for digital identity solutions

Reference: Carbone Roberto <carbone@fbk.eu> - Ranise Silvio <ranise@fbk.eu>
Funding: Fondazione Bruno Kessler
Abstract: Nowadays, digital identities are employed by the majority of European governments and private enterprises to provide a wide range of services, from secure access to social networks to online banking. As the Digital 2023 global overview report shows, the number of digital identities is growing: we have 4.76 billion social media users and spend trillions of dollars on e-commerce.  Digital identity is therefore a key ingredient for securing new IT systems and digital infrastructures such as those based on zero trust. Cryptographic protocols (e.g., OAuth/OpenID Connect) are the key enabler for digital identity solutions. For these reasons, the secure design and deployment of cryptographic protocols for digital identity solutions is a mandatory prerequisite for building trust in digital ecosystems and is an obligation shared by security practitioners and consumers. Several tools for the (formal) analysis of cryptographic protocols exist in the literature.  However they require a lot of expertise to be properly used. They must be customized according to the scenario considered and their usage is thus time consuming and error prone. The research work to be conducted in the thesis aims to develop a novel approach for the assisted security assessment of cryptographic protocols for digital identity solutions. The challenge is to deal with the complexity of the modern cryptographic protocols and application scenarios (e.g., e-voting and digital wallet), by eliciting the relevant requirements, the expected security properties and attacker capabilities, and by providing methodologies to (easily) specify them. (possibly taking into account both the computational and symbolic models). The resulting approach should be able to guide users during the design and security assessment of the cryptographic protocols, by providing actionable hints to specify the protocols, contributing to properly analyse and secure them.

This activity includes:

References:

Theme 8: Reliable AI for Digital Identity Management

Reference: Pasquini Cecilia <pasquini@fbk.eu> - Ranise Silvio <ranise@fbk.eu>  
Abstract: Artificial Intelligence (AI) is playing an increasingly relevant role in the field of Digital Identity Management, offering opportunities to enhance the efficiency and user experience in different phases of the digital identity lifecycle, including identity proofing and authentication. A prominent example is the partial or full automatization of video-based identity proofing through the real-time verification of the presented identity evidence and the physical appearance of the applicant, which allows identity providers to significantly streamline remote onboarding processes [A]. Moreover, conventional authentication flows can also be strengthened by the analysis of biometric traits and behavioral patterns, especially when deployed in IoT environments involving diverse devices and hardware capabilities [B].  For instance, the use of IoT devices (e.g., VR headsets) designed to provide immersive and Metaverse-based experiences paves the way to new biometric and behavioral procedures for identity management enabling innovative flows of data and credentials from the physical to the digital world and vice versa. This raises new challenges for guaranteeing their reliability, given their peculiar attack surface.  In fact, while providing additional security layers, decision rules based on learned models may suffer from vulnerabilities due to their statistical nature, such as sensitivity to spoofing attempts (well-known in the biometric field and now further empowered by generative AI and advanced injection techniques [C]) or adversarial attacks performed before or after their deployment. The increasing complexity of identity management systems and their attack landscape thus calls for advanced security countermeasures and principled design approaches for preventing frauds such as identity thefts and unauthorized access.  In this realm, this PhD proposal concerns the development of theoretical and practical tools for the risk assessment of innovative digital identity systems integrating AI-based components. The research will investigate advanced attack vectors, their impact on the overall security of the attacked system, and the feasibility of mitigation strategies to be deployed in practical systems.

[A] ENISA Report on “Remote ID Proofing - Good practices”, March 2023
[B] Liang, Y., Samtani, S., Guo, B., & Yu, Z., “Behavioral biometrics for continuous authentication in the internet-of-things era: An artificial intelligence perspective”, IEEE Internet of Things Journal, 2020.
[C] C. Li, L. Wang, S. Ji, X. Zhang, Z. Xi., S. Guo and T. Wang, “Seeing is Living? Rethinking the Security of Facial Liveness Verification in the Deepfake Era”, USENIX Security 2022.

Theme 9: AI for Industry 4.0

Reference: Luca Oneto <luca.oneto@unige.it>
Funding: aizOon Company
Abstract: The purpose of the project is to enhance the candidate's skills in topics such as Data Analytics, Machine Learning, and Generative AI as applied to Industry 4.0, across various applications such as predictive maintenance and automation of industrial processes. The goal is to understand, improve, and adapt the most recent results from basic research in these rapidly evolving fields, in order to make them reliably applicable in new-generation industrial contexts.