What It’s Like to Work As an AI Ethicist


AI is taking over—or, at least, that’s what many headlines suggest. Between replacing jobs, spreading misinformation online, and the (currently unfounded) threat of AI leading to human extinction, there are plenty of concerns around the ethical and practical uses of AI.

It’s a topic on many people’s minds. A 2023 KPMG report on AI found only two in five people believe current government and industry regulations, laws and safeguards are enough to make AI use safe. Here, we speak to Paula Goldman, the first-ever chief ethical and humane use officer for software company Salesforce, about why AI needs human oversight, how the tech can actually be used for good and the importance of regulation.


In simple terms, what do you do in your job?

I work to make sure that the technology that we produce is good for everyone. In more practical terms, my role has three parts.

One of them is working with our engineers and product managers, and looking at the plans that we have for our AI product, Einstein, and spotting any potential risks. This includes making sure that we’re building safeguards into our products to help people use them responsibly, to help anticipate consequences and make sure they’re being used for good.

The second part is working with our in-house policy group, which does things like developing our new AI acceptable use policy, which basically sets guardrails for how products should get used. And then finally, I work on product accessibility and inclusive design, because we want our products to be usable by everyone.


Your AI product, Einstein, does many things, from generating sales emails to analyzing businesses’ customer data so they can recommend products and better engage target demographics. How do you define ethical and humane use of your AI?

When you think about technology ethics, it’s the practice of aligning a product to a set of values. We have a set of AI principles that we put out recently, and then we revised them and put out a new set of guidelines for generative AI, because it presented a new set of risks.

In the case of generative AI, for example, one of the top principles is accuracy. We know accuracy is very important for generative AI in a business setting, and we’re working on things within the product to make sure that people are getting relevant and accurate results. For example, “dynamic grounding,” which is where you direct a large language model to answers using correct and up-to-date information to help prevent “AI hallucinations,” or incorrect responses. With generative AI models, when you direct them to a set of data and tell them, “If the answer is not in this data, say so,” you get much more relevant and accurate results. It’s things like that: How do you define a set of objectives and values, and work to make sure that a product aligns with them?
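The grounding idea described above can be sketched as a simple prompt-assembly step: supply the model with trusted, up-to-date documents and instruct it to answer only from them. This is a minimal, generic illustration of the technique—the function name, wording, and example data here are assumptions for the sketch, not Salesforce’s actual implementation.

```python
def build_grounded_prompt(question: str, documents: list[str]) -> str:
    """Assemble a prompt that restricts the model to the supplied documents,
    reducing the chance of hallucinated answers."""
    context = "\n\n".join(
        f"[Document {i + 1}]\n{doc}" for i, doc in enumerate(documents)
    )
    return (
        "Answer the question using ONLY the documents below. "
        "If the answer is not in the documents, say you don't know.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

# Hypothetical usage: ground the model in a company's own policy text.
prompt = build_grounded_prompt(
    "What is our refund window?",
    ["Refund policy: customers may request a refund within 30 days of purchase."],
)
print(prompt)
```

The resulting prompt would then be sent to whatever large language model is in use; the key design choice is that the instruction explicitly permits an “I don’t know” response, which is what discourages the model from inventing an answer.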

Tech leaders like Sam Altman, Elon Musk and Mark Zuckerberg met in Washington last September to talk AI regulation in a closed-door meeting with lawmakers. Are there enough people like you in those conversations, people who are concerned with ethical and humane use of AI?

Could there ever be enough? Though there are a lot of risks—like bias and not extending safeguards across different countries—at this moment in time for AI, one of the things that’s different than, say, five years ago, is that the public conversation is really cognizant of those risks. Unlike 10 years ago, we have like a whole host of folks considering ethics in AI right now. Does there need to be more? Yes. Does it need to be absolutely mainstream? Yes. But I think it’s growing. And I have been heartened to see a lot of those voices in the policy conversations as well.

Well, Salesforce is one of several companies, including OpenAI, Google and IBM, that have voluntarily pledged AI safety commitments and adhere to a set of self-imposed standards for safety, security and trust. How do you think other leaders in this space are implementing those safeguards in comparison to what you’re doing?

On the one hand, there is something of a community of practice across different companies and we’re very active in cultivating that. We host workshops with our colleagues to trade notes and sit on a number of ethical AI advisory boards across the world. I am on the national committee that advises the White House on AI policy, for example.

On the other hand, I would say the enterprise space and the consumer space are very different. For example, we have a policy group and set out to develop an AI acceptable use policy. To my knowledge, that is the first of its kind for enterprise. But we do that because we feel we have a responsibility to put a stake in the ground and to have early answers about what we think responsible use looks like, and evolve it over time as needed. We hope that others follow suit, and we hope that we will learn from those that do, because they may have slightly different answers than us. So there’s a collaborative spirit, but at the same time, there are no standards yet in the enterprise space—we’re trying to create them.

The conversations around the concerns and potential of AI are evolving quickly. What’s it like working in this space right now?

There’s a shared feeling amongst AI leaders that we’re collectively defining something that’s very, very important. It’s also moving very fast. We are working so hard to make sure that whatever products we put out are trustworthy. And we’re learning. Every time models get better and better, we’re examining them: What do we need to know? How do we need to pivot our strategies?

So it’s really energizing, inspiring and hopeful, but also, it’s going really fast. I’ve been at Salesforce for five years, and we’ve been working on building infrastructure around AI for that time. Sometimes you get a moment in your career where you’re like, “I’ve been practicing baseball for a long time. Now, I get to pitch.” It feels like that. This is what we were preparing for, and all of a sudden, the moment is here.

What’s one thing you’re really excited about when it comes to AI’s potential?

There are benefits around AI being able to detect forest fires earlier, or detect cancer, for example. A little closer to the work I do, I’m very excited about using AI to improve product accessibility. It’s early days, but that’s something that’s very near and dear to my heart. For example, one of the models our research team is working on is a code-generation model. As we’re continuing to fine-tune this model, we’re looking at patterns of code for accessibility. You can imagine a future state of this model, where it nudges engineers with a prompt like, “Hey, we know that code is not accessible for people with low vision, for example, and here’s how to fix it.” That can make it so much easier to just build things right the first time.
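The kind of accessibility nudge described above can be illustrated with a toy static check—for instance, flagging `<img>` tags that lack the alt text screen readers rely on. This is a deliberately simple sketch of the idea, not the model Goldman describes; the function name and heuristic are assumptions for illustration.

```python
import re


def accessibility_nudges(html: str) -> list[str]:
    """Flag <img> tags without alt text, a common accessibility gap
    for people using screen readers or with low vision."""
    nudges = []
    for tag in re.findall(r"<img\b[^>]*>", html, flags=re.IGNORECASE):
        if not re.search(r"\balt\s*=", tag, flags=re.IGNORECASE):
            nudges.append(
                f'Missing alt text: {tag} — add alt="..." describing the image.'
            )
    return nudges


# Hypothetical usage: one tag is missing alt text, one is fine.
sample = '<img src="logo.png"><img src="chart.png" alt="Q3 sales chart">'
for nudge in accessibility_nudges(sample):
    print(nudge)
```

A code-generation model would presumably surface this kind of feedback inline as the engineer types, rather than as a separate lint pass, but the underlying signal—detect the inaccessible pattern, suggest the fix—is the same.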

There’s lots of fear around AI and job loss, but where do the job opportunities exist?

I can imagine for someone who’s not involved in this space that it could seem daunting, like, “Oh, this technology is so complex,” but we—AI start-ups, tech companies and AI leaders—are collectively inventing it together. It’s really like the first inning of the game. We need many diverse perspectives at the table. We definitely need more AI ethicists, but I think we also need to build that consciousness across the board. I am really passionate, for example, about working with our ecosystem around how we scale up and implement technology responsibly. It’s a perfect time to get involved in this work.
