THE SCOOP

As global regulators increasingly scrutinize artificial intelligence, massive consulting firm Accenture is testing a startup's technology in what could become a standard method of complying with rules to ensure responsible and safe innovation.

Los Angeles-based EQTY Lab created a new method, employing cryptography and blockchain, to track the origins and characteristics of large language models and provide transparency into their inner workings, so companies and regulators can easily inspect them. The idea is to automatically examine the model as it is being created, rather than focus on its output.

"What we're doing is creating certainty" that a model works the way it was intended, said EQTY Lab co-founder Jonathan Dotan. "Right now, we've done the first step, which is proving this is possible."

EQTY Lab's AI Integrity Suite is being evaluated in Accenture's AI lab in Brussels to see if the software could be scaled to serve the firm's thousands of clients, many of which are in the Fortune 100.

The work comes as countries propose ways to address the promise and risks of AI. On Thursday, the U.S. Commerce Department announced a consortium of more than 200 tech companies, academics, and researchers that will advise the new government AI Safety Institute, which will develop "red team" testing standards and other guidelines directed by a White House executive order on AI last year.

"Responsible AI is absolutely critical. It's on top of everyone's mind," said Bryan Rich, senior managing director for Accenture's Global AI Practice. "But how do you go from talking about responsible AI to actually delivering it?"

REED'S VIEW

The White House AI executive order and other regulations, including those proposed in Europe, make it seem like watchdogs see AI models as the kinds of products that can be inspected and stamped as either safe or unsafe, like a car or a consumer gadget. In reality, large language models are like a perpetual stew, with ingredients from many places constantly thrown in together.

The idea of using just one model is also antiquated. For a single AI product, multiple models are increasingly employed, as developers glom together specially trained ones to carry out specific tasks. We are likely already approaching a point where it will be difficult and time-consuming for companies to vet every AI model they use.

That's why, in theory, EQTY's idea makes sense: a cryptographic signature would allow developers to retain trade secrets while still offering some transparency into how their models were put together.

For instance, Meta does not disclose the contents of the data used to train its Llama 2 model. That has led to tension as the company faces lawsuits alleging it violated copyright law by including protected work in its training data. Let's say that Meta, in a purely hypothetical scenario, wanted to prove that a specific set of copyrighted work was not included in that data. EQTY says it is developing a way for Meta to prove that without having to divulge the entire training set.

Read here for a Room for Disagreement on whether tracking AI models makes sense. →
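To make the trade-secrets-plus-transparency idea concrete, here is a minimal Python sketch of one way a training-data commitment could work in principle: hash each training source, publish only the combined digest, and let an auditor who already holds a contested document check its fingerprint against the published hashes. This is an illustrative assumption, not EQTY Lab's actual product; the function names and sample data are invented.

```python
# Hypothetical sketch only -- NOT EQTY Lab's implementation. It illustrates how
# hash-based commitments can attest to what went into a model without
# publishing the training data itself.
import hashlib
import json


def fingerprint(blob: bytes) -> str:
    """Return a SHA-256 digest; only this hash, not the content, is shared."""
    return hashlib.sha256(blob).hexdigest()


def build_manifest(datasets: dict[str, bytes]) -> dict:
    """Commit to every training source by name and content hash."""
    entries = {name: fingerprint(blob) for name, blob in datasets.items()}
    # In a real system this digest would be signed and anchored somewhere
    # tamper-evident (e.g., a ledger) at training time.
    digest = fingerprint(json.dumps(entries, sort_keys=True).encode())
    return {"entries": entries, "manifest_digest": digest}


def contains(manifest: dict, candidate: bytes) -> bool:
    """An auditor holding a suspect document checks whether its hash was committed."""
    return fingerprint(candidate) in manifest["entries"].values()


if __name__ == "__main__":
    training_data = {
        "public_domain_books.txt": b"Call me Ishmael...",
        "web_crawl_2023.txt": b"<html>example page</html>",
    }
    manifest = build_manifest(training_data)
    print("Published commitment:", manifest["manifest_digest"])
    # A rights holder supplies their own copy of a contested work; the model
    # builder never reveals the rest of the corpus.
    print("Contested work present?", contains(manifest, b"Copyrighted novel text"))
```

The hard part, and presumably where EQTY's attest-during-training pitch comes in, is convincing an auditor that the manifest is complete and was generated when the model was actually built, rather than assembled after the fact.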