Research Project Proposals

Cybersecurity & Reliable AI

Theme  1: Security of AI-enabled Systems of Systems

Reference: Luca Oneto - luca.oneto@unige.it
Funding: University of Genoa
Abstract: Over the last ten years, the scientific community has proposed many techniques to prevent attacks against AI/ML, or at least to detect them. However, in most cases, these attacks and defenses have been designed to work under laboratory conditions, with simplified or unrealistic assumptions that do not consider the requirements of large systems containing AI components (AI-enabled Systems of Systems). The research program of this PhD scholarship aims to deliver novel algorithmic solutions and practical software tools for the security evaluation and protection of AI-based tools and AI-empowered systems. This research will advance the state of the art in two ways: the delivered algorithmic solutions and practical software tools will explicitly take into account the requirements of selected cybersecurity applications, overcoming the unrealistic assumptions of most solutions proposed so far (e.g., the practical feasibility of the attacks will be explicitly considered); and challenging, novel application domains will be considered, such as the cyber-physical security of computer vision for driver assistance systems.

Theme  2: Cybersecurity

Reference: Alessandro Armando - alessandro.armando@unige.it
Funding: University of Genoa
Abstract: A wide range of technical and methodological research challenges in a number of key areas of Cybersecurity (access and usage control, security of virtualization technologies, dual use of Cybersecurity techniques and tools, ...). For more information, please drop an email to alessandro.armando@unige.it

Theme  3: Design of AI-Enabled Systems

Reference: Luca Oneto - luca.oneto@unige.it
Funding: University of Genoa
Abstract: Over the last ten years, the scientific community has proposed many techniques to prevent attacks against AI/ML, or at least to detect them. However, in most cases, these attacks and defenses have been designed to work under laboratory conditions, with simplified or unrealistic assumptions that do not consider the design of large systems containing AI components (Design of AI-Enabled Systems). The research program of this PhD scholarship will go beyond the state of the art, which has considered the security of “isolated” machine learning algorithms, by analyzing the design and security of larger, AI-empowered, cyber-physical systems made up of AI-based and non-AI-based components (e.g., malware detection architectures combining blacklisting, machine-learning static analysis, etc.).

Theme  4: Science and Engineering of AI Security

Reference: Luca Oneto - luca.oneto@unige.it
Funding: DIBRIS on the Partenariato Esteso SERICS SOS-AI project
Abstract: Microsoft has reported a dramatic increase in attacks on commercial systems based on Artificial Intelligence (AI) and machine learning (ML) algorithms over the past years. Notably, Microsoft pointed out that companies usually lack the knowledge and tools to secure their ML-powered systems. Over the last ten years, the scientific community has proposed many techniques to prevent attacks against AI/ML, or at least to detect them. However, in most cases, these attacks and defenses have been designed to work under laboratory conditions, with simplified or unrealistic assumptions that do not consider the requirements of cybersecurity applications (e.g., the practical feasibility of the attacks is often not considered). The theoretical foundations of machine learning were not originally developed with intelligent, adaptive attackers in mind, attackers who can manipulate input data to purposely subvert the learning process, which is exactly the situation in cybersecurity. The research program of this PhD scholarship aims to critically revisit the foundations of machine learning, focusing on open research questions that arise from practical requirements of cybersecurity applications and require a novel, fundamental understanding of machine learning theory.

Theme  5: Automated security, privacy, and risk management of digital identity solutions

Reference: Silvio Ranise - ranise@fbk.eu, Roberto Carbone - carbone@fbk.eu, Giada Sciarretta - giada.sciarretta@fbk.eu
Funding: Bruno Kessler Foundation
Abstract: Nowadays, digital identities are employed by the majority of European governments and private enterprises to provide a wide range of services, from secure access to social networks to online banking. As the Digital 2023 global overview report shows, the number of digital identities is growing: we have 4.76 billion social media users and spend trillions of dollars on e-commerce. Digital identity is therefore a key ingredient for securing new IT systems and digital infrastructures such as those based on zero trust. For these reasons, the secure deployment of digital identity solutions is a mandatory prerequisite for building trust in digital ecosystems and is an obligation shared by security practitioners and consumers. The research work to be conducted in the thesis aims to develop a new approach for automated security, privacy, and risk management in the design, development, and maintenance of digital identity solutions. The challenge is to deal with the multiple dimensions of the design space as a continuum in which specifications are analyzed both in isolation and as refinements of each other. The approach should take into account the specific security and privacy issues of each phase and, at the same time, consider the interdependencies among the design and implementation choices performed in the various phases, bridging the gap among them. The resulting approach should be automated, auditable, provide actionable hints to reduce risk, and be easy to integrate into the wide range of services and applications that arise in the plethora of use case scenarios resulting from the pressure of digital transformation. This activity includes:
- Analysis of state-of-the-art identity management solutions and their security issues.
- Identification of relevant use cases.
- Specification of a (semi-)automatic approach for security and risk management of digital identity solutions.
- Implementation of the approach in a tool and experimental evaluation on real-world use cases.
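
As a concrete example of the class of security checks such an approach would automate, the sketch below enforces exact-match redirect URI validation, as recommended by the OAuth 2.0 Security Best Current Practice; the client registry and function names are hypothetical, not part of the proposal:

```python
from urllib.parse import urlsplit

# Hypothetical client registration: the exact redirect URIs allowed.
REGISTERED = {"https://app.example.com/callback"}

def redirect_uri_allowed(uri: str) -> bool:
    """Exact-match validation of OAuth redirect URIs.

    Prefix or substring matching is a classic source of token-leak
    vulnerabilities, so the BCP recommends exact string comparison.
    """
    # Fragments are forbidden in OAuth redirect URIs; reject them outright.
    if urlsplit(uri).fragment:
        return False
    return uri in REGISTERED

print(redirect_uri_allowed("https://app.example.com/callback"))       # → True
print(redirect_uri_allowed("https://app.example.com/callback/../x"))  # → False
print(redirect_uri_allowed("https://evil.example/app.example.com"))   # → False
```

An automated analysis tool could flag identity deployments that deviate from this exact-match discipline as a concrete, actionable risk finding.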

Theme  6: Formal methods for industry

Reference: Marco  Bozzano - bozzano@fbk.eu
Funding: Bruno Kessler Foundation
Abstract: Industrial systems are reaching an unprecedented degree of complexity. The process of designing a complex system is expensive, time-consuming and error-prone. Moreover, the design process has to guarantee not only the functional correctness of the implemented system, but also its dependability and resilience with respect to run-time faults. Hence, the design process must characterize the likelihood of faults, mitigate possible failures, and assess the effectiveness of the adopted mitigation measures. Formal methods have been increasingly used over the last decades to address these challenges. Formal methods are based on the adoption of a formal, mathematical model of the system, shared between all actors involved in the system design, and on a tool-supported methodology to aid all the steps of the design, from the definition of the architecture down to the final implementation in HW and SW. Formal methods include technologies such as model checking, an automatic technique to symbolically and exhaustively analyze all possible executions of the system in the formal model, in order to detect design flaws as early as possible. Model checking techniques have recently been extended to assess the safety and dependability characteristics of the design, and for system certification. The objective of this study is to advance the state of the art in system design using formal methods. This includes adapting and extending the system design methodology, investigating improved versions of state-of-the-art routines for verification and safety assessment of complex systems, and developing novel extensions to address open problems.
Examples of such extensions include novel techniques for contract-based design and contract-based safety assessment, advanced techniques for formal verification based on compositional reasoning, the analysis of the timing aspects of fault propagation, the characterization of transient and sporadic faults, the analysis of the effectiveness of fault mitigation measures in the presence of complex fault patterns, and the modeling and analysis of systems with continuous and hybrid dynamics. This study will exploit the challenges and benchmarks defined in various industrial projects carried out at FBK.
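
As a minimal illustration of the exhaustive exploration that model checking performs, the following sketch checks whether a "bad" state is reachable in a toy transition system; the system and all names are illustrative, not tied to FBK's tools:

```python
from collections import deque

def reachable(initial, transitions, bad):
    """Explicit-state reachability: explore every state reachable from
    `initial` via `transitions` and report whether any `bad` state is hit."""
    frontier = deque([initial])
    seen = {initial}
    while frontier:
        state = frontier.popleft()
        if state in bad:
            return True  # counterexample: a bad state is reachable
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False  # exhaustively verified: no bad state is reachable

# Toy mutual-exclusion-style model: states pair the two processes' modes.
T = {
    ("idle", "idle"): [("crit", "idle"), ("idle", "crit")],
    ("crit", "idle"): [("idle", "idle")],
    ("idle", "crit"): [("idle", "idle")],
}
BAD = {("crit", "crit")}  # both processes in the critical section at once
print(reachable(("idle", "idle"), T, BAD))  # → False: mutual exclusion holds
```

Industrial model checkers explore such state spaces symbolically rather than one state at a time, but the verification question, whether any execution reaches a flaw, is the same.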

Theme  7: Identification and simulation of the effect of defensive cyber course of actions during a cyber attack

Reference:
Funding: Leonardo
Abstract: 

Theme  8: Cloud Security – Confidential Computing

Reference: Alessandro Armando - alessandro.armando@unige.it, Matteo Dell'Amico - matteo.dellamico@unige.it
Funding: Leonardo
Abstract: Cloud computing is a paradigm in which clients delegate computational tasks to external entities (cloud providers). Often, these computations involve private data that owners are not willing to disclose to the cloud provider. Confidential computing is a technique that leverages a trusted computing base: a set of hardware and software considered protected from prying eyes, for example thanks to trusted execution environments (TEEs), hardware components that hide the data being processed from the hardware owners and allow access to it only to certified pieces of software. Alternatives to this approach may rely on cryptographic techniques, such as homomorphic encryption or secure multi-party computation, or on decentralized system designs that distribute the information among enough peers that each party only has a very limited view of the overall system. This project involves studying the design and reliability of such systems, the attacks they can be subject to, and the countermeasures or mitigations that can protect against such attacks.
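
A minimal sketch of one of the cryptographic alternatives mentioned above, additive secret sharing over a public prime modulus; the modulus and party count are illustrative choices, not prescribed by the proposal:

```python
import secrets

P = 2**61 - 1  # public prime modulus (an illustrative choice)

def share(value, n):
    """Split `value` into n additive shares mod P; any n-1 shares together
    reveal nothing about the secret."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Only the sum of ALL shares recovers the secret."""
    return sum(shares) % P

print(reconstruct(share(42, 5)))  # → 42

# The scheme is additively homomorphic: parties sum their shares of two
# secrets locally and reconstruct the sum without seeing either input.
a, b = share(10, 3), share(32, 3)
summed = [(x + y) % P for x, y in zip(a, b)]
print(reconstruct(summed))  # → 42
```

Practical secure multi-party computation protocols build on this kind of primitive, adding multiplication of shared values and protection against misbehaving parties.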

Theme  9: Trustworthy AI for Industrial Applications (2 Grants)

Reference: Fabio Roli - fabio.roli@unige.it
Funding: Rina
Abstract: Trustworthy AI refers to the development and deployment of artificial intelligence (AI) systems that are reliable, ethical, and accountable. It emphasizes the importance of ensuring that AI systems are transparent, fair, secure, and respectful of privacy, while also considering their impact on society and human well-being. Trustworthy AI aims to address concerns related to bias, discrimination, lack of transparency, and unintended consequences that can arise from the use of AI technologies. This PhD scholarship will focus on Trustworthy AI for Industrial Applications. The research program aims to address the challenges and opportunities of deploying AI systems in industrial settings while prioritizing reliability, ethics, and accountability, and focuses on developing innovative solutions and best practices to ensure that AI technologies used in industry are transparent, fair, secure, and respectful of privacy, while also considering their societal and environmental impact.

Theme  10: Robust Artificial Intelligence for Safety-Critical Applications

Reference: Luca Oneto - luca.oneto@unige.it
Funding: D.M. 118 University of Genoa
Abstract: The research project of this PhD scholarship aims to develop reliable and trustworthy AI systems in domains where human safety is fundamental. As AI is increasingly being integrated into safety-critical applications such as autonomous vehicles, medical diagnostics, and industrial control systems, ensuring their robustness and dependability becomes crucial. This PhD project focuses on two main objectives: (1) Robustness Enhancement: developing advanced techniques to enhance the robustness of AI systems against adversarial attacks, data perturbations, and unforeseen scenarios. By developing innovative algorithms and methodologies, the project aims to minimize the vulnerabilities of AI models and improve their ability to handle uncertainties and abnormal conditions. (2) Verification and Validation: to ensure the safety and reliability of AI systems, the project emphasizes the development of rigorous verification and validation frameworks. The outcomes of this PhD project have the potential to significantly impact industries reliant on safety-critical AI, including autonomous transportation, healthcare, aerospace, and manufacturing.
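
As a toy illustration of the adversarial-attack side of objective (1), the following sketch computes the smallest L2 perturbation that flips the decision of a linear classifier; all numbers and names are illustrative, not part of the project:

```python
import numpy as np

def minimal_flip_perturbation(w, b, x):
    """For a linear classifier sign(w.x + b), return the smallest L2
    perturbation that moves x across the decision boundary."""
    margin = np.dot(w, x) + b
    # Project x onto the decision hyperplane along the normal direction w.
    delta = -(margin / np.dot(w, w)) * w
    return delta * 1.01  # tiny overshoot so the sign actually flips

w = np.array([1.0, -2.0]); b = 0.5
x = np.array([3.0, 1.0])                    # classified positive: w.x + b = 1.5
x_adv = x + minimal_flip_perturbation(w, b, x)
print(np.sign(w @ x + b), np.sign(w @ x_adv + b))  # → 1.0 -1.0
```

For deep models the boundary is not a hyperplane, so attacks approximate this step using gradients (e.g., FGSM-style methods), but the underlying idea, a small, targeted perturbation that crosses the decision boundary, is the same vulnerability the project aims to defend against.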

Theme  11: Trustworthy Machine Learning for industry 4.0

Reference: Luca Oneto - luca.oneto@unige.it
Funding: aizoOn
Abstract: The purpose of the project is to enhance the candidate's skills in areas such as Data Analytics, Machine Learning, and Trustworthy AI applied to Industry 4.0, in various applications such as predictive maintenance and automation of industrial processes. The aim is to understand, improve, and adapt the most recent findings from basic research in these rapidly evolving fields, in order to reliably apply them to new-generation industrial contexts. The increasingly advanced progress in the field of AI, Machine Learning, and Data Analytics is profoundly and irreversibly transforming the industrial system at an unprecedented speed. The disruptive impact of this transformation in all sectors - both industrial and beyond - is due to AI's ability to fuel a radical change in digital and physical systems, making them increasingly interconnected and capable of interacting and collaborating intelligently. The term Industry 4.0 denotes this trend toward industrial automation that integrates new productive technologies to improve working conditions and increase productivity and the quality of plants. This progressive and radical adoption of AI in the industrial field inevitably brings with it a growing interest in the concept of reliability. In general, AI reliability can be understood as evaluating results in terms of the protection of the individual user and the context in which they operate, as well as in terms of usefulness and precision. The term encompasses both the correctness and dependability of the results produced by AI and ethical-social considerations, which in turn include several themes, to name just a few: the safety and privacy of the data usage process, the interpretability of the algorithms used, and the fairness of the approach.
Each of these themes will be explored by the student, who will need to acquire the skills necessary to master them individually, recognize and assess their interdependencies, as well as to develop an ability to independently detect the potential issues that may arise from the use of AI and Data Analytics. The development of the project will therefore focus on finding ways to integrate innovative theoretical models and applicative approaches based on the concept of reliability. In this way, the project will allow the candidate to combine high-level scientific training with the acquisition of skills relevant to the industrial contexts of reference.
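
As a small, concrete example of the fairness theme mentioned above, the following sketch computes the demographic parity gap, the difference in positive-prediction rates between two groups, one of the standard fairness metrics; the data and function name are illustrative:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups
    (0 means the model predicts positives at equal rates for both)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate in group 0
    rate_b = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

preds  = [1, 0, 1, 1, 0, 0, 1, 0]  # binary model predictions
groups = [0, 0, 0, 0, 1, 1, 1, 1]  # protected-attribute membership
print(demographic_parity_gap(preds, groups))  # → 0.5 (3/4 vs 1/4)
```

Interpretability and privacy admit analogous quantitative checks, which is what allows the reliability themes listed above to be assessed, and traded off, during model development rather than after deployment.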

Theme  12: AutoEncoder applications in network security: dimensionality reduction, anomaly detection, and explainability

Reference: Luca Oneto - luca.oneto@unige.it
Funding: aizoOn
Abstract: The primary aim of this project is to delve into how AutoEncoders (AE) can be employed in network security applications, both for dimensionality reduction and for anomaly detection. Additionally, we wish to explore how some AE architectures provide important tools for achieving eXplainable Artificial Intelligence (XAI) on problems where interpretability is scarce. Dimensionality reduction methods have been employed extensively to improve both regression and classification accuracy, and various AE architectures have been applied to this task in recent years, showing better results than traditional methods that do not leverage Deep Learning (DL). This approach shows promising results, and its use in network security should be further studied to explore new areas in which it could be employed. Another area in which AE have seen a surge of applications is anomaly detection: this project will study how the flexibility of AE can be leveraged in unsupervised DL to detect anomalies in network traffic, which is especially important when attempting to detect zero-day attacks. Lastly, a crucial aspect of network security is the trustworthiness of the detection system, and XAI is one of the best tools at our disposal to improve this element. This work will analyze how AE can be used to produce explainable outputs, and how XAI can be used to provide more robust and accurate predictions.
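
As a minimal illustration of the reconstruction-error idea behind AE-based anomaly detection, the sketch below uses a linear autoencoder (equivalent to PCA, fit in closed form via SVD) as a stand-in for the deep architectures the project targets; all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal traffic": 3-D feature vectors lying near a 1-D subspace.
normal = rng.normal(size=(200, 1)) @ np.array([[1.0, 2.0, -1.0]])
normal += 0.05 * rng.normal(size=normal.shape)

# Fit the optimal linear autoencoder: the top principal direction.
mu = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mu, full_matrices=False)
W = Vt[:1].T  # tied encoder/decoder weights, latent dimension 1

def score(x):
    """Anomaly score = reconstruction error through the bottleneck."""
    z = (x - mu) @ W        # encode into the latent space
    x_hat = z @ W.T + mu    # decode back to the input space
    return np.linalg.norm(x - x_hat)

print(score(np.array([1.0, 2.0, -1.0])))  # on-manifold point: small error
print(score(np.array([2.0, -1.0, 3.0])))  # off-manifold point: large error
```

Nonlinear AEs generalize this scheme: training on normal traffic only, they reconstruct in-distribution inputs well and fail on anomalous ones, so a threshold on reconstruction error flags suspicious flows without labeled attack data.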

Theme  13: HPC for Safe and Secure AI

Reference: Luca Oneto - luca.oneto@unige.it
Funding: Leonardo and Regione Liguria
Abstract: The goal of this PhD is to increase and consolidate skills in "HPCN for Safe and Secure AI", through a path focused on the study of these highly innovative approaches and the various techniques that can be used. The objectives are:
- Apply in-depth techniques of "HPCN for Safe and Secure AI" to real-life contexts during study periods
- Transfer the resulting technology to the region
The implementation method involves participation in projects with the company for key clients, field analysis, and working with the various teams involved.

Theme  14: Artificial Intelligence Applied for the Last-Mile Multimodal Transport Systems

Reference: Luca Oneto - luca.oneto@unige.it
Funding: Progetto Adele and Regione Liguria
Abstract: The goal of this PhD is to increase and consolidate skills in Artificial Intelligence applied to the logistics and transportation context, through a path focused on the study of these highly innovative approaches and the various techniques that can be used to provide decision-making tools for operators in the logistics chain. The objectives are:
- Apply advanced Artificial Intelligence techniques to real contexts during study periods
- Transfer the resulting technology to the region
The method of implementation involves participation in projects with the company for reference clients, field analysis, and working with the various teams involved.