Apple’s research division has unveiled a prototype of a new generative AI animation tool, ‘Keyframer,’ which adds motion to 2D images via text prompts.
Apple is keen to explore the potential of large language models (LLMs) in animation, just as it has in text and image generation. Its recent generative AI projects include Human Gaussian Splats (HUGS), which creates animation-ready human avatars from video clips, and MGIE, which edits images using text prompts.
In a research paper published last week, the company explains that Keyframer is powered by OpenAI’s GPT-4 model and takes input in the form of Scalable Vector Graphics (SVG) files. It then produces CSS code that animates the image based on a text prompt. A prompt can be any description of how the animation should look, e.g. “make the frog jump.”
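As a rough illustration (not taken from Apple’s paper), the kind of CSS that a prompt like “make the frog jump” might yield could look like the following, assuming the input SVG contains an element with the hypothetical id `frog`:

```css
/* Hypothetical output for the prompt "make the frog jump".
   Assumes the input SVG has an element with id="frog". */
#frog {
  animation: jump 1s ease-in-out infinite;
}

@keyframes jump {
  0%, 100% { transform: translateY(0); }     /* resting on the ground */
  50%      { transform: translateY(-40px); } /* peak of the jump */
}
```

Because SVG elements can be styled with ordinary CSS, code like this can be dropped into the SVG file (or an enclosing web page) to animate the shape directly in the browser.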
Prompts can also generate multiple animation designs at once, and a separate editing window lets you adjust properties of the animated image, such as color codes and animation durations. What’s more, no prior coding experience is needed to use the tool, since any changes you make are automatically converted into CSS.
Unlike other AI animation tools, which typically require coding experience and running multiple applications simultaneously to get the work done, Keyframer keeps the workflow simple.
This is only the beginning of exploring Keyframer’s potential, as the tool isn’t publicly available yet. Like any AI tool, however, Keyframer has its limitations. It does not generate the high-quality animations seen in video games and movies; for now, its capabilities are limited to web-based animations such as loading sequences, data visualizations, and animated transitions.