As one rather commercially successful author once wrote, “the night is dark and full of terrors, the day bright and beautiful and full of hope.” It’s fitting imagery for AI, which, like all technology, has its upsides and downsides.
Art-generating models like Stable Diffusion, for instance, have led to an incredible outpouring of creativity, powering apps and even entirely new business models. On the other hand, their open source nature allows bad actors to use them to create deepfakes at scale, all while artists protest that the models profit from their work.
Where is AI headed in 2023? Will regulation rein in the worst of what AI brings, or will the floodgates open? Will powerful, transformative new forms of AI emerge, à la ChatGPT, disrupting industries once thought safe from automation?
Expect more (questionable) art-generating AI applications
With Lensa, the AI-powered selfie app from Prisma Labs, all the rage, you can expect plenty of similar apps. And expect them, too, to be capable of being tricked into creating NSFW images and into disproportionately sexualizing and altering the appearance of women.
Maximilian Gahntz, a senior policy fellow at the Mozilla Foundation, said he expects the integration of generative AI into consumer technology to amplify the impact of such systems, both good and bad.
Stable Diffusion, for example, was fed billions of images from the internet until it “learned” to associate certain words and concepts with certain imagery. And text-generating models have routinely been tricked into espousing offensive views or producing misleading content.
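To make concrete how those learned word-image associations get used at inference time, here is a minimal sketch using Hugging Face’s diffusers library; the checkpoint name and hardware assumptions are illustrative, not taken from the article.

```python
# Minimal text-to-image sketch with Hugging Face's diffusers library.
# The checkpoint ID below is an assumption (the publicly released
# Stable Diffusion v1.5 weights); substitute another if needed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # halves memory use; requires a CUDA GPU
).to("cuda")

# The prompt is free text; the model renders it using the word-image
# associations it picked up during training.
image = pipe("an astronaut riding a horse, oil painting").images[0]
image.save("astronaut.png")
```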
Mike Cook, a member of the Knives and Paintbrushes Open Research Group, agrees with Gahntz that generative AI will continue to prove a major and problematic transformative force. But he thinks 2023 has to be the year that generative AI “finally puts its money where its mouth is.”
A prompt from TechCrunch, model by Stability AI, generated in the free tool DreamStudio.
“It’s not enough to incentivize a community of experts [to create new tech] — for technology to become a long-term part of our lives, it has to either make someone a lot of money or have a meaningful impact on the public’s daily lives,” Cook said. “So I predict we’ll see a serious push to make generative AI actually achieve one of those two things, with varying degrees of success.”
Artists lead effort to opt out of datasets
DeviantArt released an AI art generator built on Stable Diffusion and fine-tuned on artwork from the DeviantArt community. The generator was met with backlash from longtime DeviantArt users, who criticized the platform’s lack of transparency in using their uploaded art to train the system.
The creators of the most popular systems — OpenAI and Stability AI — say they have taken steps to limit the amount of harmful content their systems produce. But judging by the many generations on social media, it’s clear there’s still work to be done.
“Datasets need to be actively curated to address these issues and should be subject to intense scrutiny, including from communities that tend to lose out,” Gahntz said, comparing the process to the ongoing controversy over content moderation in social media.
Stability AI, which primarily funded the development of Stable Diffusion, recently bowed to public pressure, suggesting it would allow artists to opt out of the dataset used to train the next generation of Stable Diffusion models. Through the website HaveIBeenTrained.com, rights holders will be able to request an opt-out before the training begins in a few weeks.
OpenAI does not offer such an opt-out mechanism, preferring instead to partner with organizations such as Shutterstock to license portions of their image galleries. But given the legal and publicity headwinds it faces alongside Stability AI, it is likely only a matter of time before it follows suit.
Courts may eventually force its hand. In the U.S., Microsoft, GitHub, and OpenAI are being sued in a class action lawsuit that accuses them of violating copyright law by letting Copilot, GitHub’s service that intelligently suggests lines of code, regurgitate sections of licensed code without providing credit.
Perhaps anticipating the legal challenge, GitHub recently added settings to prevent public code from appearing in Copilot’s suggestions, and plans to introduce a feature that will cite the sources of code suggestions. But these are imperfect measures. In at least one instance, the filter setting caused Copilot to emit large chunks of copyrighted code, including all attribution and license text.
Expect growing criticism in the coming year, especially as Britain considers rules that would remove the requirement that systems trained on public data be used strictly non-commercially.
Open source and decentralization efforts will continue to grow
In 2022, a handful of AI companies dominated the stage, primarily OpenAI and Stability AI. But the pendulum could swing back toward open source in 2023 as the ability to build new systems moves beyond what Gahntz calls “well-resourced and capable AI labs.”
The community approach could lead to more scrutiny of systems as they are built and deployed, he said: “If models are open and datasets are open, that will enable much more of the critical research that has pointed out many of the flaws and harms linked to generative AI, research that is often far too difficult to conduct.”

Image credits: Results from OpenFold, an open-source AI system that predicts the shapes of proteins, compared to DeepMind’s AlphaFold2.
Examples of such community-focused efforts include large language models from EleutherAI and BigScience, an effort backed by the AI startup Hugging Face. Stability AI is funding a number of communities itself, such as the music-generation-focused Harmonai and OpenBioML, a loose collection of biotech experiments.
Training and running complex AI models still requires money and expertise, but as open source efforts mature, decentralized computing could challenge traditional data centers.
BigScience has taken a step toward enabling decentralized development with the recently released open-source Petals project. Petals lets people contribute their computing power, similar to Folding@home, to run large AI language models that typically require high-end GPUs or servers.
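For a sense of what that looks like in practice, the sketch below follows the Petals project’s early README; the DistributedBloomForCausalLM class and the bigscience/bloom-petals model ID are assumptions drawn from that documentation and may have changed since.

```python
# Minimal Petals sketch: run BLOOM with most transformer blocks served
# by volunteer machines in the public swarm. Class and model names
# follow the project's early README and are assumptions.
from transformers import BloomTokenizerFast
from petals import DistributedBloomForCausalLM

MODEL_NAME = "bigscience/bloom-petals"  # assumed swarm-backed checkpoint

tokenizer = BloomTokenizerFast.from_pretrained(MODEL_NAME)
model = DistributedBloomForCausalLM.from_pretrained(MODEL_NAME)

# Embeddings run locally; attention and MLP blocks execute on peers,
# which is why a consumer GPU (or even a CPU) suffices client-side.
inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0]))
```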
“Modern generative models are computationally expensive to train and run. Some rough estimates put ChatGPT spending at about $3 million per day,” Chandra Bhagavatula, a senior research scientist at the Allen Institute for Artificial Intelligence, said via email. “To make it commercially viable and more widely available, it’s important to address this issue.”
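To see why spend at that scale is plausible, here is a purely hypothetical back-of-envelope calculation; every figure in it (request volume, tokens per request, price per token) is an assumption chosen for illustration and does not come from the estimate Bhagavatula cites.

```python
# Purely hypothetical back-of-envelope for daily LLM serving costs.
# All figures are illustrative assumptions, not reported numbers.
queries_per_day = 10_000_000      # assumed daily request volume
tokens_per_query = 1_500          # assumed prompt + response length
cost_per_1k_tokens = 0.02         # assumed inference cost in USD

daily_cost = queries_per_day * (tokens_per_query / 1_000) * cost_per_1k_tokens
print(f"~${daily_cost:,.0f} per day")  # ~$300,000/day under these assumptions
```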
However, Chandra noted that as long as methods and data remain proprietary, large labs will retain a competitive advantage. In a recent example, OpenAI released Point-E, a model that can generate 3D objects given text prompts. But while OpenAI open-sourced the model, it did not disclose the sources of Point-E’s training data or release the data itself.

Point-E generates point clouds.
“I do think open source and decentralization efforts are absolutely worthwhile and benefit more researchers, practitioners, and users,” Chandra said. “However, despite being open source, the best models are still not available to a large number of researchers and practitioners due to resource constraints.”
AI companies brace for upcoming regulations
Regulations such as the EU AI Act could change the way companies develop and deploy AI systems. So could more local initiatives such as New York City’s AI Recruiting Regulation, which requires bias audits of AI and algorithm-based technologies used to recruit, hire, or promote before they are used.
Chandra thinks these regulations are necessary, especially given the increasingly obvious technical shortcomings of generative AI, such as its tendency to spit out factually wrong information.
“This makes generative AI difficult to apply to many areas where mistakes can be costly — such as healthcare. Also, the ease of generating incorrect information presents challenges with misinformation and disinformation,” she said. “[And yet] AI systems are already making decisions fraught with moral and ethical implications.”
Next year, though, brings only the threat of regulation — expect more debate over rules and court cases before anyone is fined or prosecuted. But companies may still vie for placement in the most favorable categories in upcoming laws, such as the risk categories of the AI Act.
The rules as currently written classify AI systems into one of four risk categories, each with different requirements and levels of scrutiny. Systems in the highest-risk category, “high-risk” AI (e.g., credit-scoring algorithms, robotic surgery applications), must meet certain legal, ethical, and technical standards before being allowed to enter the European market. The lowest-risk category, “minimal or no risk” AI (e.g., spam filters, AI-enabled video games), imposes only transparency obligations, such as making users aware that they are interacting with an AI system.
Os Keyes, a Ph.D. candidate at the University of Washington, expressed concern that companies would aim for the lowest level of risk to minimize their own liability and visibility to regulators.
“That worry aside, [the AI Act] is really the most positive thing I’ve seen on the table,” they said. “I haven’t seen much of anything out of Congress.”
But investment is uncertain
Gahntz argued that even if an AI system works well for most people, it can be deeply harmful to some, and that “there’s still a lot of homework left” before a company should make it widely available. “There’s also a business case for all of this. If your model generates a lot of mess, consumers won’t like it,” he added. “But obviously it’s also about fairness.”
It’s unclear whether companies will be swayed by these arguments next year, especially since investors seem eager to put their money behind almost any promising generative AI.
In the midst of the Stable Diffusion controversies, Stability AI raised $101 million at an over-$1 billion valuation from high-profile backers including Coatue and Lightspeed Venture Partners. OpenAI is said to be valued at $20 billion as it enters advanced talks to raise more funding from Microsoft. (Microsoft previously invested $1 billion in OpenAI in 2019.)
Of course, these may be exceptions to the rule.

Image credits: Jasper
Outside of the self-driving companies Cruise, Wayve, and WeRide and the robotics firm MegaRobo, the best-performing AI companies in terms of money raised this year were all software-based, according to Crunchbase. Contentsquare, which sells a service that provides AI-driven recommendations for web content, closed a $600 million round in July. Uniphore, which sells software for “conversational analytics” (think call center metrics) and conversational assistants, landed $400 million in February. Meanwhile, Highspot, whose AI-powered platform provides sales reps and marketers with real-time and data-driven recommendations, nabbed $248 million in January.
Investors may chase safer bets, such as automatically analyzing customer complaints or generating sales leads, even if these are not as “sexy” as generative AI. That’s not to say there won’t be big, eye-catching investments, but they’ll be reserved for impact players.