Artificial Intelligence in the Modern Workplace: AI Governance for Software Development

Admin CG | October 17, 2023

This edition of Artificial Intelligence in the Modern Workplace focuses on key issues facing employers whose employees may use artificial intelligence (AI) platforms, such as ChatGPT or GitHub Copilot, when developing company software, whether for internal or external use.

Software Development in a Nutshell

At a basic level, software development refers to the process of designing, creating, deploying, and supporting software. Software is a set of instructions that tells a computer what to do. There is also embedded software, which typically controls machines and devices that are not general-purpose computers. These devices can be mission critical when used in healthcare or critical infrastructure; a pacemaker, for example, runs on embedded software.

Generally, within a company, a team of people works in different roles in the software development process: (1) programmers or coders, who write the source code that instructs computers; (2) software engineers, who build software and systems to solve problems in a more general way; and (3) software developers, who typically develop specific projects and/or drive the overall software development lifecycle. As AI has become more integrated into the workplace, employees in these roles have begun using AI programs to increase their efficiency and to aid in developing and updating software.

AI Involvement in Software Development

AI has undoubtedly changed the way software developers work. While AI can certainly increase a developer's efficiency, it does not eliminate the need for human developers. Tools like ChatGPT and GitHub Copilot let developers save time on boilerplate code and spend that time adding value to the software they build. In this way, the developer's time, and the employer's money, go toward the features that set a company apart from its competitors.
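To make the point concrete, the short sketch below shows the kind of routine boilerplate an assistant might draft from a brief prompt, assuming a Python codebase; the class and function names are invented for this illustration, not taken from any particular tool's output.

```python
# Hypothetical illustration: routine boilerplate an AI assistant might
# draft from a one-line prompt, freeing the developer for design work.
from dataclasses import dataclass
import json


@dataclass
class ServerConfig:
    """Minimal configuration record for a service."""
    host: str
    port: int
    debug: bool = False


def load_config(path: str) -> ServerConfig:
    """Load a ServerConfig from a JSON file."""
    with open(path, "r", encoding="utf-8") as f:
        data = json.load(f)
    return ServerConfig(**data)
```

Code like this is easy to generate and easy to verify, which is exactly where AI assistance pays off; the harder, value-adding work remains with the developer.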

This does not mean these programs should be a one-stop shop for developers. Human oversight remains vital to producing successful and secure code, and several risks must be weighed when using AI programs to assist in development. First, there is the risk that the generated code will contain errors. However efficient these tools are, a human developer still needs to review the output: the code will only be as good as the dataset the program was trained on and the information it is fed.
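As a brief, hypothetical illustration of why review matters, consider the kind of subtle defect generated code can contain. The function below is invented for this example; it looks plausible but shares one default list across every call, a classic Python pitfall a human reviewer should catch.

```python
# Hypothetical example of a subtle bug a reviewer should catch.
def append_item(item, items=[]):      # bug: the default list is created
    items.append(item)                # once and shared across all calls
    return items

append_item("a")   # returns ["a"]
append_item("b")   # returns ["a", "b"], not ["b"]


# A corrected version a reviewer might substitute:
def append_item_fixed(item, items=None):
    if items is None:
        items = []                    # fresh list on every call
    items.append(item)
    return items
```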

Second, employers should consider security issues. Just as their software developers have access to these programs, so do hackers and attackers, who may exploit the capabilities of AI tools in an attempt to gain improper access to a system. It is important to create an internal policy that prohibits employees from entering items such as credentials or tokens directly into the code. Likewise, developers should verify that no malicious code is injected into the code base when incorporating AI-generated code.
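As one minimal sketch of what such a policy means in practice, again assuming a Python codebase, the snippet below contrasts a hard-coded secret with a secret read from the environment at runtime; the variable names are illustrative.

```python
# Hypothetical sketch: keep secrets out of source files, where an AI
# tool (or anyone with repository access) could read or retransmit them.

# Risky: a literal token pasted into source.
# API_TOKEN = "sk-live-example-token"   # do NOT do this

# Safer: pull the secret from the environment (or a secrets manager)
# at runtime, so it never appears in the code base or in prompts.
import os

API_TOKEN = os.environ["API_TOKEN"]  # raises KeyError if unset
```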

Third, there are intellectual property concerns. As discussed in the previous edition of this series, no proprietary information should be entered into any AI generation tool that other users may access or that the provider may use to train its AI model. Entering confidential and proprietary information could void trade secret protection and potentially allow any user to access it. Without the proper policies in place, there is a risk that an employee may inadvertently push confidential information into one of these platforms. Beyond exposing confidential information, there is also a risk of generating code that infringes copyrighted code. Employers can mitigate the risk of copyright infringement by, for example, outlining which open-source code and licenses are allowed, prohibiting the use of generated code in core IP, or preventing developers from copying full code verbatim from the generating program.
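As a hedged sketch of how the open-source piece of such a policy might be backed by tooling, the check below rejects dependencies whose licenses are not on an approved list; the allowed set and the function are assumptions invented for this example, not a reference to any specific compliance tool.

```python
# Hypothetical license allowlist check; the approved set is illustrative.
ALLOWED_LICENSES = {"MIT", "BSD-3-Clause", "Apache-2.0"}


def check_dependency(name: str, license_id: str) -> None:
    """Raise if a dependency's license is not on the approved list."""
    if license_id not in ALLOWED_LICENSES:
        raise ValueError(
            f"{name} is licensed under {license_id}, "
            "which is not on the approved open-source list."
        )
```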

A Cautionary Tale

In April 2023, Samsung Electronics banned the use of ChatGPT and other AI-powered chatbots by its employees following concerns that sensitive internal information had been leaked on these platforms. The action was reportedly taken after an employee uploaded sensitive internal source code to ChatGPT. Samsung's concern stemmed from the fact that data shared with an AI-powered chatbot is stored on the platform's servers, where it is difficult to retrieve or delete and could ultimately be exposed to other users.

While Amazon and JPMorgan have issued similar warnings to their employees, companies such as Goldman Sachs and IBM have stated that they will begin using AI tools to support certain business tasks. As the use and capabilities of AI evolve, employers should consider implementing policies that can advance with the changing landscape. Putting preemptive guardrails in place allows employers to proactively manage how AI is used in the workplace, reducing the risks of unmonitored AI use and increasing its positive impact on their business.

