Research Projects

Workflow of the Paper

We researched Machine Learning and its application in discerning binary user sentiment (0 = negative, 1 = positive) in restaurant reviews. This research exposed us to several algorithms widely employed in supervised learning, including the Naïve Bayes, Support Vector Machine, and K-Nearest Neighbor classifiers. By analyzing online reviews, we found that these algorithms can accurately classify user feedback as positive or negative and thereby help predict user decisions. The outcomes of our research were presented at the 2017 IEEE Region 10 Humanitarian Technology Conference (R10-HTC) and subsequently published in the IEEE digital library. This achievement demonstrated the practicality and relevance of our work and indicated its potential application to domains beyond restaurant reviews, such as online shopping, technology adoption, and the evaluation of online services.
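The kind of supervised pipeline described above can be sketched in a few lines. This is a minimal illustration, assuming scikit-learn and a tiny hand-made review set, not the dataset or exact configuration from the paper:

```python
# Minimal sketch: binary sentiment classification of restaurant reviews
# with a bag-of-words Naive Bayes pipeline (illustrative data only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reviews = [
    "The food was delicious and the service was great",
    "Amazing pasta, friendly staff, will come again",
    "Terrible experience, cold food and rude waiter",
    "Awful service and the soup was bland",
]
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

# Vectorize the text into word counts, then fit the classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(reviews, labels)

print(model.predict(["The staff was friendly and the food delicious"])[0])
```

Swapping `MultinomialNB` for `sklearn.svm.SVC` or `sklearn.neighbors.KNeighborsClassifier` reproduces the other two classifier families compared in the study.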

CGMs Device Workflow

The Continuous Glucose Monitoring System (CGMs) device is among the most developed technologies in diabetes care, reshaping manual diabetes management with smart features: sensors, a transmitter, and a monitor. Although the device marks a smart landmark in blood glucose monitoring, the number of CGMs users remains very low compared to existing manual systems. This assessment therefore aspires to explore the factors influencing users' intention to adopt CGMs devices in Internet of Things (IoT)-based healthcare. We proposed an adoption model for CGMs devices by integrating factors from theories used in existing studies of wearable healthcare devices. The proposed model also examines current factors as a guideline for users adopting the CGMs device. We collected data from 97 actual CGMs device users and used partial least squares structural equation modeling (PLS-SEM) for the study's measurement and structural model assessment.
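One standard measurement-model check in PLS-SEM studies is indicator reliability, often reported as Cronbach's alpha per construct. The sketch below computes it with NumPy on synthetic Likert-style responses; the data, construct, and noise levels are hypothetical, not the survey data from the study:

```python
# Illustrative sketch: Cronbach's alpha, a common reliability check
# for a reflective construct in PLS-SEM measurement model assessment.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x indicators matrix of Likert-style responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of indicator variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of summed scale
    return k / (k - 1) * (1 - item_vars / total_var)

# Synthetic data: 97 respondents (matching the sample size in the study),
# 4 indicators driven by one latent factor plus noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=97)
items = np.column_stack(
    [latent + rng.normal(scale=0.5, size=97) for _ in range(4)]
)

alpha = cronbach_alpha(items)
print(f"Cronbach's alpha: {alpha:.2f}")
```

Values above roughly 0.7 are conventionally read as acceptable reliability before moving on to the structural model.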

Metaverse Architecture

The Metaverse is the future of the Internet-connected world of social networks. It builds on emerging technologies such as extended reality (XR), virtual reality (VR), augmented reality (AR), and headsets like the Meta Quest. As the popularity of the Metaverse grows, cyber-criminals increasingly target it and its users. Security and privacy remain among the most challenging areas of the Metaverse, and more focus must be given to making it trustworthy. Achieving this goal requires a thorough and systematic security analysis that enables the adoption of proper mitigation strategies, and a threat model can guide such an analysis. We presented a threat model-based systematic security analysis of the Metaverse, using the widely accepted STRIDE model to identify its threats along with corresponding mitigation strategies. Our threat model can help users, researchers, and developers build a trustworthy Metaverse.
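A STRIDE catalog of the kind produced by such an analysis is easy to encode as data, pairing each category with a targeted component and a mitigation. The entries below are generic illustrations, not the full catalog from the paper:

```python
# Illustrative sketch: a STRIDE-style threat catalog as structured data.
from dataclasses import dataclass

@dataclass
class Threat:
    category: str    # one of the six STRIDE categories
    component: str   # Metaverse component the threat targets
    mitigation: str

STRIDE_CATALOG = [
    Threat("Spoofing", "avatar identity", "multi-factor authentication"),
    Threat("Tampering", "virtual asset data", "integrity checks and signing"),
    Threat("Repudiation", "in-world transactions", "secure audit logging"),
    Threat("Information Disclosure", "XR sensor streams", "end-to-end encryption"),
    Threat("Denial of Service", "rendering servers", "rate limiting and redundancy"),
    Threat("Elevation of Privilege", "world-editing APIs", "least-privilege access control"),
]

for t in STRIDE_CATALOG:
    print(f"{t.category:<24} {t.component:<22} -> {t.mitigation}")
```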

Architecture of Health Information Systems

Health Information Systems (HISs) play a critical role in modern healthcare by facilitating the storage, management, and exchange of patient health information. However, the increasing reliance on technology and the rise in cyber threats have heightened the importance of securing these systems. This paper presents a comprehensive approach to improving security practices in HISs by applying the STRIDE threat modeling framework. STRIDE offers a systematic methodology for identifying and analyzing potential vulnerabilities, covering threats such as Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. By applying the STRIDE approach, healthcare organizations can proactively identify threats, evaluate their potential impact, and implement appropriate security controls to mitigate risks. Our threat model highlights the significance of a robust security posture in safeguarding patient privacy, maintaining data integrity, and ensuring the confidentiality of sensitive health information.

Overview of Ambient Intelligence

Ambient intelligence (AmI) systems, which create smart and adaptive environments, offer numerous benefits but raise significant security concerns. In this paper, we presented a security analysis of ambient intelligence using the STRIDE threat modeling approach. STRIDE, an acronym for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege, provides a systematic framework for identifying and mitigating threats in AmI environments. We focused on identifying and analyzing security risks specific to ambient intelligence systems, considering the unique characteristics and challenges they pose. By applying the STRIDE threat modeling approach, we uncovered vulnerabilities and provided recommendations to enhance the security posture of various AmI components, such as sensors, data processing, communication channels, and user interactions. Furthermore, our research findings contribute to a better understanding of security issues in ambient intelligence and provide valuable insights for developers, system architects, and policymakers to design and deploy secure AmI systems.

Cyberthreats in Smart Cities

Smart Cities: Cybersecurity Concerns (In Progress)

Smart cities employ cutting-edge technology to enhance the living standards of urban dwellers. In this book chapter, we explored the relationship between cybersecurity and smart cities. We also investigated the various constituents of smart cities, such as Artificial Intelligence, big data analytics, and the Internet of Things. These elements can optimize traffic management, resource allocation, and data processing. However, smart cities face cyber threats like malware, data breaches, and social engineering. We examined smart cities' unique challenges and vulnerabilities, such as data privacy, IoT devices, limited resources, and critical infrastructure. A cyberattack on smart city infrastructure can have dire consequences for the economy and society. We highlighted proactive measures that stakeholders can adopt to ensure cybersecurity, including but not limited to developing frameworks and partnerships, sharing threat intelligence, and creating incident response plans. We showcased successful case studies and best practices drawn from previous cyberattacks. Additionally, we explored the future of smart cities and cybersecurity, which encompasses emerging technologies, regulatory frameworks, and ethical concerns.

Workflow of AI-based Coding Tools

The widespread adoption of Artificial Intelligence (AI)-based coding tools such as ChatGPT, Copilot, OpenAI Codex, and Tabnine necessitates a comprehensive evaluation of their security measures, aiming to identify potential threats and vulnerabilities. In this paper, we presented a systematic and structured approach to conducting a threat model-based security analysis of AI-based coding tools using the widely accepted STRIDE threat model and data flow diagrams (DFDs). By establishing clear system boundaries and constructing detailed DFDs, the data flow within the system is visually represented. We then applied STRIDE threat modeling, encompassing spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege, to thoroughly examine potential threats. Prioritizing threats by their impact allowed us to develop targeted mitigation strategies. Our threat model can provide organizations with a robust methodology to ensure the security of their AI-based coding tools and to proactively address emerging risks to build a trustworthy coding environment.
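The prioritization step above can be sketched as a simple scoring pass over identified threats. The impact x likelihood scheme, threat names, and scores below are hypothetical illustrations, not the ranking from the paper:

```python
# Illustrative sketch: ranking STRIDE threats for an AI coding tool
# by a simple impact x likelihood score (all values hypothetical).
threats = [
    {"name": "Prompt/code injection into suggestions",
     "stride": "Tampering", "impact": 5, "likelihood": 4},
    {"name": "Leakage of proprietary code in prompts",
     "stride": "Information Disclosure", "impact": 5, "likelihood": 3},
    {"name": "Spoofed model endpoint",
     "stride": "Spoofing", "impact": 4, "likelihood": 2},
    {"name": "API flooding of the completion service",
     "stride": "Denial of Service", "impact": 3, "likelihood": 3},
]

# Score each threat, then rank highest-risk first.
for t in threats:
    t["score"] = t["impact"] * t["likelihood"]

ranked = sorted(threats, key=lambda t: t["score"], reverse=True)
for t in ranked:
    print(f'{t["score"]:>2}  [{t["stride"]}] {t["name"]}')
```

The top of the ranked list is where targeted mitigations (e.g., output sanitization, prompt redaction) would be applied first.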

How Secure is AI-based Coding?: A Security Analysis of AI-based Coding Tools Using STRIDE and Data Flow Diagrams (DFD) [Poster]

I presented a poster titled "How Secure is AI-based Coding?: A Security Analysis of AI-based Coding Tools" at the "College of Arts and Sciences AI Research Retreat" hosted by the University of Alabama at Birmingham. We utilized the STRIDE threat modeling framework and Data Flow Diagrams to systematically assess security risks in AI-based coding tools, identifying threats such as spoofing, tampering, and code injection. We proposed strategies to mitigate these risks, including access control mechanisms, encryption, and regular security audits. Emphasizing the importance of a comprehensive threat model, we discussed components such as assets, entry points, the attacker model, threats, and mitigation strategies. Our presentation aimed to raise awareness about security challenges in AI-based coding tools and provide insights into effective mitigation strategies to safeguard against potential threats.