Businesses and individual users alike are grappling with how to use generative artificial intelligence in responsible and beneficial ways. To help guide them, researchers at the MIT Initiative on the Digital Economy are looking at how AI is being developed and used and exploring its potential and limitations.
At the 2024 MIT IDE Annual Conference in May, researchers shared insights and updates about their work. Topics ranged from quantum computing and responsible data use to how generative AI learns, how it affects hiring, and how it can help fight disinformation.
A new report from the conference offers a closer look at some of the researchers’ key findings. Among them:
1. People have complicated perceptions of AI-generated content.
As generative AI is increasingly used to create content, researchers are looking to understand how that content is perceived. According to a study by MIT Sloan senior lecturer Renée Richardson Gosline and MIT Sloan postdoc Yunhao “Jerry” Zhang, people generally say they prefer content created by humans. Yet when shown examples of AI-generated and human-created content, they did not express an aversion to the AI-generated versions. And when they were not told how the content was created, they actually preferred the AI-generated content.
Read the research: “Human Favoritism, Not AI Aversion”
Watch the conference session: “Human-First AI”
2. Data provenance is increasingly important.
AI models are trained on data — and it’s important to understand how that data was collected. Otherwise, the data might be inappropriate for a given application, might have been gathered illegally, or might lack the right information. This is why a group of researchers from MIT and elsewhere have collaborated on the Data Provenance Initiative, which audits the datasets used to train large language models. A related tool, the Data Provenance Explorer, lets users select different criteria for — and see information about — data they might use.
Watch the conference session: “Building a Distributed Economy”
3. The democratization of AI has a long way to go.
AI research used to be evenly divided between academia and industry. This is no longer the case, according to a team of researchers that includes MIT research scientist Neil Thompson and postdoc Nur Ahmed. They found that over the past decade, industry has gained the upper hand when it comes to computing power and access to data, making it easier for businesses to hire talent, develop AI benchmarks, and invest in research. But that also means that industry is influencing the direction of basic AI research, raising concerns about whether future AI developments will serve the public interest.
Read the research: “The Growing Influence of Industry in AI Research”
Watch the conference session: “Artificial Intelligence, Quantum, and Beyond”
4. Companies managed by “geeks” are more agile than traditional organizations.
In his new book, “The Geek Way,” IDE co-director Andrew McAfee looks at how geeky companies such as Netflix have successfully developed new management techniques. Geek companies “move faster, are a lot more egalitarian, give a great deal of autonomy, and try to settle their arguments via evidence,” McAfee said.
Read more about the research: “New Book Explains the ‘Geek Way’ to Run a Company”
Watch the conference session: “Technology-Driven Organizations and Culture”
5. Job loss from AI might not be as bad as some feared — at least, not right away.
In another study co-authored by Thompson, researchers built a new AI task automation model to more accurately predict the pace of automation. Looking specifically at computer vision, they found that technical and cost barriers could leave about three-quarters of affected jobs unchanged in the near term. In the meantime, Thompson said, businesses can perform cost-benefit analyses to determine which tasks make sense to automate with AI.
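The intuition behind such an analysis can be sketched in a few lines. The toy Python comparison below — with entirely made-up figures and a hypothetical helper function, not numbers or code from the study — checks whether the amortized cost of an AI system undercuts the wages currently paid for a task.

```python
# Hypothetical back-of-the-envelope check, loosely inspired by the framing in
# "Beyond AI Exposure": automating a task is only economical if the annualized
# cost of an AI system is below the wages currently paid for that task.
# All names and numbers here are illustrative assumptions.

def worth_automating(annual_wage_bill: float,
                     system_build_cost: float,
                     annual_running_cost: float,
                     amortization_years: int = 5) -> bool:
    """Return True if the yearly AI cost undercuts the yearly wage bill."""
    annual_ai_cost = system_build_cost / amortization_years + annual_running_cost
    return annual_ai_cost < annual_wage_bill

# Example: a vision task absorbing $90K/year in wages vs. a $400K system
# with $25K/year in upkeep, written off over five years.
print(worth_automating(annual_wage_bill=90_000,
                       system_build_cost=400_000,
                       annual_running_cost=25_000))
# False: $105K/year in AI costs exceeds the $90K wage bill,
# so this task stays with human workers for now.
```

Under these assumed figures, automation doesn’t pay off — which mirrors the study’s broader point that many tasks remain uneconomical to automate in the near term.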
Read the research: “Beyond AI Exposure”
Watch the conference session: “Artificial Intelligence, Quantum, and Beyond”
Watch all the conference session videos