The 'iterative frame' describes a space in which predictive analytics can be inscribed on the body and, thus, narrated. As the volume of video increases, creators turn to software to automate parts of the editing process. This creates a new dynamic between production and post-production in which behavior is increasingly interpreted by humans and classification algorithms alike.

Machine learning is increasingly used in both the production and consumption of video. As this technology spreads to more areas of cultural production, it will continue to shape our understanding of the social world. Synthetic videos can mimic certain events (e.g., birthdays and weddings), as such events generally unfold within a limited repertoire of identifiable gestures.
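The claim that events unfold within a limited repertoire of identifiable gestures can be caricatured in code. The sketch below is purely illustrative: the gesture names, event categories, and the naive overlap-counting classifier are all invented for this example, not drawn from any real video-analytics system.

```python
# Illustrative sketch only: an event is "recognized" when enough of its
# signature gestures are detected. All labels here are hypothetical.
from collections import Counter

# Hypothetical mapping from event category to its signature gestures.
EVENT_GESTURES = {
    "birthday": {"blow_candles", "clap", "unwrap_gift"},
    "wedding": {"exchange_rings", "first_dance", "toss_bouquet"},
}


def classify_event(detected_gestures):
    """Score each event by how many of its signature gestures appear."""
    detected = set(detected_gestures)
    scores = Counter()
    for event, signature in EVENT_GESTURES.items():
        scores[event] = len(signature & detected)
    best, score = scores.most_common(1)[0]
    return best if score > 0 else "unknown"


print(classify_event(["clap", "blow_candles", "wave"]))  # → birthday
print(classify_event(["wave"]))                          # → unknown
```

The toy's brittleness is the point: a gesture outside the predefined vocabulary simply does not register, which is one way the "limited frame" the essay describes gets enforced in practice.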

Predictive, automated video editing creates a space in which our evolving relationship to machines might be embodied, and therefore better understood. So long as these dynamics remain disembodied, they create a low hum of anxiety. The protocols of video analytics and database hierarchies are already projecting themselves onto everyday life. As these patterns become easier to index and parse, our performance for them becomes, perhaps inevitably, ritualized. I study this space as a microcosm of the broader shifts in social behavior coded by interpreting machines.