Apple’s research division has unveiled a prototype of a new generative AI animation tool, Keyframer, which adds motion to 2D images using text prompts.
Apple is keen to explore the potential of large language models (LLMs) in animation, just as it has in text and image generation. Its latest generative AI projects include Human Gaussian Splats (HUGS), which creates animation-ready human avatars from video clips, and MGIE, which edits images using text prompts.
In a research paper the company published last week, it explains that Keyframer is powered by OpenAI’s GPT-4 model and takes input in the form of Scalable Vector Graphics (SVG) files. It then generates CSS code that animates the image based on a text prompt. The prompt can be any description of how the animation should look, e.g. “make the frog jump.”
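To give a sense of the pipeline, here is a minimal sketch of the kind of CSS a tool like this might produce for a prompt such as “make the frog jump.” This example is illustrative, not taken from Apple’s paper; the `.frog` selector is a hypothetical class name assumed to mark a group inside the input SVG.

```css
/* Hypothetical output for the prompt "make the frog jump".
   Assumes the input SVG contains an element with class="frog". */
@keyframes jump {
  0%   { transform: translateY(0); }
  50%  { transform: translateY(-40px); } /* peak of the jump */
  100% { transform: translateY(0); }
}

.frog {
  animation: jump 0.8s ease-in-out infinite;
}
```

Because SVG elements can be styled with ordinary CSS, pairing an LLM-generated stylesheet like this with the original SVG is enough to play the animation in any modern browser.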