
As AI rewrites the rules of content creation, human creators stand to benefit.

Generative AI programs can free human creators from tedious tasks, allowing them to focus on ideas and creative thinking.
For years, the 150-year-old Colorado State Fair hosted a little-known fine art competition. But when the 2022 winners were announced in August, the local event suddenly drew controversy from around the world. The judges had chosen “Théâtre D’opéra Spatial,” a piece created by artist Jason Allen with an AI generator, as the winner in the digital art category. The decision set off a flurry of criticism on Twitter, with some calling it the “death of art” and others worrying that the technology could one day put artists out of work.
Until recently, machines, traditionally seen as predictable and devoid of spontaneity, were difficult to associate with creativity. Artificial intelligence (AI), however, has brought the creative industries to a turning point: AI-powered systems are becoming an integral part of the generative and creative process. Allen’s work, which, as its title suggests, depicts a surreal “space opera” scene, not only demonstrates what modern image-generation models can do but also points to their potential to enhance human creativity.
Among the AI technologies that have emerged in recent years is generative AI, a class of deep learning models that lets computers produce original content such as text, images, video, audio, and code. Demand for such content is likely to skyrocket in the coming years: Gartner predicts that generative AI will account for 10 percent of all data produced by 2025, up from 1 percent in 2022.
“Théâtre D’opéra Spatial” is an example of AI-generated content (AIGC), created with the Midjourney text-to-image generator. 2022 also saw several other AI-powered art programs capable of producing paintings from a single line of text. The variety of tools reflects a wide range of artistic styles and user needs. For example, DALL-E 2 and Stable Diffusion lean toward Western-inspired art, while Baidu’s ERNIE-ViLG and Wenxinyige create images inspired by Chinese aesthetics. At Wave Summit+ 2022, Baidu’s deep learning developer conference, the company announced that Wenxinyige had been updated with new features, including converting photos into AI-generated art, image editing, and one-click video creation.
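To make the prompt-to-image workflow above concrete, here is a minimal sketch using the open-source Stable Diffusion model through the Hugging Face diffusers library. It is not the pipeline behind Midjourney, ERNIE-ViLG, or Wenxinyige; the checkpoint name, prompt, and settings are illustrative assumptions.

```python
# Minimal text-to-image sketch with Stable Diffusion via the diffusers library.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained Stable Diffusion checkpoint (weights download on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed publicly available checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                   # a GPU is assumed; use "cpu" otherwise

# A single-line text prompt, in the spirit of "Théâtre D'opéra Spatial".
prompt = "a grand surreal space opera scene, ornate theatre, dramatic lighting"

# Run the diffusion process and save the generated image.
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("space_opera.png")
```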
At the same time, AIGC also covers articles, videos, and other media products such as synthesized speech. Speech synthesis is a technique for producing audible speech that closely resembles the original speaker’s voice, and it can be applied in many scenarios, including voice navigation in digital maps. Baidu Maps, for example, lets users create a custom navigation voice by recording just nine sentences.
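As a simple illustration of the text-to-speech step described above, the sketch below uses the open-source pyttsx3 library, which drives the operating system’s built-in voices. Cloning a specific user’s voice from a few recorded sentences, as Baidu Maps does, requires a dedicated voice-cloning model and is not shown here; the spoken sentence is an invented navigation-style example.

```python
# Minimal text-to-speech sketch using pyttsx3, which wraps the OS's built-in voices.
import pyttsx3

engine = pyttsx3.init()                  # initialize the platform's TTS engine
engine.setProperty("rate", 170)          # speaking speed in words per minute
engine.say("In 300 meters, turn left onto Main Street.")  # navigation-style prompt
engine.runAndWait()                      # block until the sentence has been spoken
```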
Recent advances in AI have also produced generative language models that can compose text with a single click. They can be used for copywriting, document processing, resume extraction, and other text-processing tasks, unlocking forms of creativity that technologies such as speech synthesis cannot. One of the leading generative language models is Baidu’s ERNIE 3.0, which is widely used in industries such as medicine, education, technology, and entertainment.
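For a sense of how “one-click” text generation works in practice, here is a minimal sketch using the Hugging Face transformers pipeline with the small open GPT-2 model as a stand-in; ERNIE 3.0 itself is served through Baidu’s own platform and is not used here, and the copywriting prompt is an invented example.

```python
# Minimal copywriting sketch with a generative language model (GPT-2 as a stand-in).
from transformers import pipeline

# Build a text-generation pipeline; model weights download on first run.
generator = pipeline("text-generation", model="gpt2")

# An invented copywriting prompt; the model continues the text from here.
prompt = "Product description: a lightweight motion-capture glove that"
result = generator(prompt, max_new_tokens=60, num_return_sequences=1)

print(result[0]["generated_text"])       # draft copy for a human writer to refine
```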
“Over the past year, artificial intelligence has taken a huge leap in its technical direction,” said Robin Li, Baidu’s chief executive. “AI has moved from understanding images and text to creating content.” As an example, he pointed to AI that can turn the material in an article into a short video with voice narration.
As AIGC becomes more widespread, it can make content creation more efficient by freeing creators from repetitive and time-consuming tasks such as organizing source assets and recordings and rendering images. Aspiring filmmakers, for example, have long had to spend countless hours learning the complex and tedious craft of video editing; with AIGC, they may soon no longer need to.
Beyond improving efficiency, AIGC can also help content creation businesses grow, as demand for personalized, interactive digital content keeps rising. InsightSLICE predicts that the global digital creativity market will grow at a compound annual rate of 12 percent between 2020 and 2030, reaching $38.2 billion. With content consumption rapidly outpacing production, traditional production methods struggle to keep up, creating a gap that AIGC can fill. “Artificial intelligence could meet the huge demand for content ten times cheaper and hundreds or thousands of times faster in the next decade,” Li said.
AIGC can also be used as an educational tool to help kids develop their creativity. For example, StoryDrawer is an AI-powered program designed to develop children’s creative thinking, which tends to decline as the focus of their education shifts to rote learning.
The program, developed at Zhejiang University using Baidu’s AI algorithms, stimulates children’s imagination through visual storytelling. When a child describes an imagined scene, the system generates an image from the description while offering verbal prompts that encourage the child to expand on it. The approach is built on the idea that children develop creativity better when they talk while drawing than when they draw alone. As the team continued developing the program, they also saw great potential for StoryDrawer to help children with autism build their verbal and descriptive skills.
At the heart of StoryDrawer is the Chinese principle “yi ren wei ben,” meaning “putting people first.” This idea guided the Zhejiang University team in developing the AI-assisted art creation system. They believe that any development of artificial intelligence should empower people rather than replace them, a value they see as key to unlocking the true potential of a promising but often misunderstood technology.
Looking to the future, Robin Li identifies three main stages in the development of AI-generated content. The first is the “assistance stage,” in which AI helps people create content such as audiobooks. Next comes the “collaboration stage,” in which AIGC appears as virtual avatars that coexist and work with human creators. The final stage is the “original creation stage,” in which AI generates content on its own.
As with every new technology, it remains to be seen how AIGC will ultimately unfold and be put to use. While there are many uncertainties, history shows that a new technology rarely replaces its predecessor entirely. When the camera was invented in the 1800s, many dismissed photography as something less than art, fearing that an automated device would displace skilled artists with years of training in realistic drawing. Yet painting remains a cornerstone of the art world today.
Just as past technologies helped spread art beyond a privileged few, AIGC’s accessibility will open up creativity to many more people, allowing them to take part in creating valuable content. In doing so, AIGC challenges long-held notions of art while also redefining what it means to be an artist.


Post time: Nov-30-2022