Articles/Publications

Browse our curated selection of articles, reports, white papers, special issues and similar resources from across IEEE.
 
Discover even more ethics-related content at IEEE Xplore.

Gender and Technology: The Case of the Energy Sector

Energy policy is often formulated in a gender-neutral manner; that is, policy makers assume that women and men use and benefit equally from current energy systems. In practice, however, such policies are not neutral, because they ignore the differential impacts they have on different genders and on socioeconomic and cultural groups. Built on these false assumptions, the policies are less effective and/or have unintended effects. This white paper by the IEEE Dignity, Inclusion, Identity, Trust, and Agency (DIITA) Industry Connections Program discusses these issues.

Framing Cognitive Machines: A Sociotechnical Taxonomy

This paper proposes a taxonomy for analyzing the socio-economic disruptions caused by technological innovations. A principled, transdisciplinary approach is used to build the taxonomy by categorizing and characterizing technologies with concepts and definitions drawn from cybernetics, occupational science, and economics. Concrete illustrations of the concepts and their uses are offered, including an Industry 5.0 case study as an application of the taxonomy.

Essential Skills for IT Professionals in the AI Era

Artificial intelligence is transforming industries worldwide, creating new opportunities in health care, finance, customer service, and other fields. But the ascendance of AI also raises concerns about job displacement, especially as the technology may automate tasks traditionally done by humans. This article presents skills IT professionals need to stay relevant, including key insights into AI ethics.

Enhancing the Fairness and Performance of Edge Cameras with Explainable AI

The rising use of artificial intelligence (AI) for human detection in edge camera systems has led to accurate but complex models that are challenging to interpret and debug. The research described in this paper presents a diagnostic method that uses explainable AI (XAI) for model debugging, with expert-driven problem identification and solution creation. The approach helps identify model biases, which is essential for achieving fair and trustworthy models.

Custom Developer GPT for Ethical AI Solutions

This paper motivates the need for a new software artefact: a custom Generative Pre-trained Transformer (GPT) that helps developers discuss and solve ethical issues through AI engineering. Such a tool can allow practitioners to engineer AI solutions that meet legal requirements and satisfy diverse ethical perspectives. The paper details the idea and demonstrates a use case.

Do Generative AI Tools Ensure Green Code? An Investigative Study

The sustainability, or 'greenness', of software is typically determined by the adoption of sustainable coding practices. Despite their potential advantages, there is a significant lack of studies on the sustainability aspects of AI-generated code. Specifically, how environmentally friendly is AI-generated code, based on its adoption of sustainable coding practices? This paper presents the results of an early investigation into the sustainability aspects of AI-generated code across three popular generative AI tools: ChatGPT, BARD, and Copilot.

Ethical Design and Implementation of AI in the Field of Learning and Education: Symmetry Learning Technique

This paper examines the nuanced role AI technologies can play in augmenting primary education while, at the same time, protecting children's privacy rights.

Bridging Deep Tech Ethics, Community Literacy, and Computer Science Education

This paper presents an assignment that leads students through a process of self-reflection, ethical analysis, and collaborative inquiry in a master’s level computer science course that blends professional communication and computing ethics.

Generative AI Has a Visual Plagiarism Problem

The degree to which large language models (LLMs) might “memorize” some of their training inputs has long been an open question. This guest post in IEEE Spectrum discusses how LLMs are in some instances capable of reproducing, verbatim or with minor changes, substantial chunks of text that appear in their training sets.