A second Google A.I. researcher says the company fired her.
Two months after the jarring departure of a well-known artificial intelligence researcher at Google, a second A.I. researcher at the company said she was fired after criticizing the way it treated employees working to address bias and toxicity in its A.I. systems.
Margaret Mitchell, known as Meg, who was one of the leaders of Google’s Ethical A.I. team, sent a tweet on Friday afternoon saying simply: “I’m fired.”
Google confirmed that her employment had been terminated. “After conducting a review of this manager’s conduct, we confirmed that there were multiple violations of our code of conduct,” read a statement from the company.
The statement went on to say that Dr. Mitchell had violated the company’s security policies by removing confidential documents and private employee data from the Google network. Axios reported last month that the company had previously said Dr. Mitchell tried to remove such files.
Dr. Mitchell said on Friday evening that she would soon have a public comment.
Dr. Mitchell’s post on Twitter came less than two months after Timnit Gebru, the other leader of the Ethical A.I. team at Google, said she had been fired by the company after criticizing its approach to minority hiring and to bias in A.I. In the wake of Dr. Gebru’s departure, Dr. Mitchell strongly and publicly criticized Google’s handling of the matter.
More than a month ago, Dr. Mitchell said that she had been locked out of her work accounts. On Wednesday, she tweeted that she remained locked out, and that the lockout had followed her attempts to defend Dr. Gebru, who is Black.
“Exhausted by the endless degradation to save face for the Upper Crust in tech at the expense of minorities’ lifelong careers,” she wrote.
Dr. Mitchell’s departure from the company was another example of the rising tension between Google’s senior management and its work force, which is more outspoken than the work forces at other big companies. The news also highlighted a growing conflict in the tech industry over bias in A.I., which is entwined with questions about hiring from underrepresented communities.
Today’s A.I. systems can carry human biases because they learn their skills by analyzing vast amounts of digital data, which often reflects those biases. And because the researchers and engineers building these systems are often white men, many worry that the issue is not getting the attention it needs.
Google announced in a blog post on Thursday that Marian Croak, a company executive who is Black, would oversee a new group inside the company dedicated to responsible A.I.