We began working on a video titled “ARITA in Your Daily Life” for the exhibition “ARITA: The Voice that Beautifies the World.”
The video features ARITA’s voice expressing contemporary Korean phrases commonly used in everyday life.
We believe that incorporating generative AI for motion and sound brings deeper meaning and resonance to the work.
Part 1 _ Image / Midjourney
After selecting reference images from Pinterest that matched each keyword,
we used Midjourney to generate new visuals through three different methods:
image to image
prompt to image
image + image (blend)
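The three methods above differ mainly in how the prompt is composed. As a rough illustration (this helper is hypothetical; Midjourney itself is driven through Discord, where image-to-image prepends an image URL to the text prompt and blending uses the /blend command):

```python
def build_midjourney_prompt(method, text=None, image_urls=None):
    """Compose a Midjourney-style command string for one of the three methods.

    Hypothetical helper for illustration only; in practice these are typed
    into Discord as /imagine or /blend commands.
    """
    image_urls = image_urls or []
    if method == "prompt_to_image":
        # Pure text prompt.
        return f"/imagine prompt: {text}"
    if method == "image_to_image":
        # An image URL placed before the text guides the generation.
        return f"/imagine prompt: {image_urls[0]} {text}"
    if method == "image_mix":
        # /blend merges two or more uploaded images without a text prompt.
        return "/blend " + " ".join(image_urls)
    raise ValueError(f"unknown method: {method}")
```

For example, `build_midjourney_prompt("image_mix", image_urls=["a.png", "b.png"])` yields `/blend a.png b.png`.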
Part 2 _ Motion / Runway
We added motion to the generated images in Runway using two methods:
image to video
image + prompt to video
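The two methods can be thought of as the same generation request with an optional text prompt steering the motion. A minimal sketch, where the field names are assumptions for illustration, not Runway's actual API schema:

```python
def make_motion_request(image_path, prompt=None, duration_s=4):
    """Build an illustrative request dict for image-to-video generation.

    The field names here are assumptions, not Runway's real API.
    With no prompt, this is plain "image to video"; adding a prompt
    corresponds to "image + prompt to video".
    """
    request = {
        "input_image": image_path,
        "duration_seconds": duration_s,
    }
    if prompt:
        # The text prompt steers how the still image is animated.
        request["motion_prompt"] = prompt
    return request
```

The optional-prompt design mirrors how the two workflows felt in practice: the image alone already produces motion, and text is only added when the default movement needs direction.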
While the motion work process is highly efficient, we are still exploring ways to improve resolution and image saturation.
Part 3 _ Sound / Eleven Labs
The sound for the exhibition video was created by combining multiple voices.
We used Eleven Labs, an AI tool capable of generating realistic human voices, to produce and blend them seamlessly.
Korean
English
Chinese
Japanese
We generated voice clips in these four languages with the tool.
Currently, only foreign (non-Korean) voices can be selected, so the pronunciation of languages other than English sounded slightly unnatural.
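Blending the generated voices can be as simple as averaging their waveforms. A minimal sketch, assuming each clip has already been decoded into a mono sample list at the same sample rate (Eleven Labs returns audio files; the decoding step is omitted here):

```python
def blend_voices(clips):
    """Mix several mono sample sequences by averaging, sample by sample.

    clips: list of equal-rate sample lists; shorter clips are zero-padded.
    Averaging (rather than summing) keeps the mix from clipping.
    """
    length = max(len(c) for c in clips)
    mixed = []
    for i in range(length):
        total = sum(c[i] if i < len(c) else 0.0 for c in clips)
        mixed.append(total / len(clips))
    return mixed
```

In a real pipeline this per-sample mix would normally be done in an audio editor or with an audio library, but the principle is the same.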
Final Outcome
The ability to generate engaging and large-scale results in a short amount of time is one of the most appealing aspects of AI-based work.
Utilizing various AI tools greatly contributes to improving work efficiency.
Multiple prompt attempts are still often required to achieve results that closely match the intended vision,
so we look forward to future updates that will make the process even easier and faster.