There’s something profoundly human in looking back.
In remembering who we were, how we dressed, how we imagined the future — and realizing how much of it has quietly become the present. It’s in moments like these that we tend to return to our roots.
👁️
A World Seen Through Algorithms
We now inhabit environments saturated with cameras and computer vision systems. These systems parse reality through bounding boxes and probability thresholds. Among them, object detection models are trained to locate and classify elements within images, frame by frame, assigning labels such as “person”, “car”, or “animal”.
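To make this concrete, here is a minimal sketch of what a detector's output looks like downstream: raw predictions reduced to labeled bounding boxes by a probability threshold. The `Detection` structure, the scores, and the boxes are all invented for illustration — real pipelines differ in detail, but the thresholding step works like this.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str    # predicted class, e.g. "person"
    score: float  # model confidence in [0, 1]
    box: tuple    # bounding box (x1, y1, x2, y2) in pixels

def filter_detections(detections, threshold=0.5):
    """Keep only detections whose confidence clears the threshold."""
    return [d for d in detections if d.score >= threshold]

# Hypothetical raw output for a single frame.
raw = [
    Detection("person", 0.92, (40, 20, 180, 300)),
    Detection("car",    0.71, (200, 150, 400, 280)),
    Detection("person", 0.31, (300, 40, 360, 200)),  # below threshold: discarded
]

kept = filter_detections(raw, threshold=0.5)
print([d.label for d in kept])  # the low-confidence "person" is filtered out
```

An adversarial garment succeeds precisely when it pushes the "person" score under that threshold — the subject is still in frame, but no longer in the output list.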
In this landscape, being visible means being processed. And clothing — once a passive surface — becomes an active participant in this exchange.
🧠🧵
From Adversarial Attacks to Wearable Textures
Adversarial attacks on neural networks consist of small visual perturbations capable of altering a model’s predictions. In object detection, this can mean preventing an object from being recognized altogether, or causing it to be misclassified.
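The mechanism can be sketched on a toy model. Real attacks target deep detectors, not a linear classifier, and this is not Cap_able's method — it is only the core idea of gradient-guided perturbation (an FGSM-style step) shrunk to three numbers:

```python
import numpy as np

# Toy linear "classifier": a positive score means class "person".
w = np.array([1.0, -2.0, 0.5])

def predict(x):
    return "person" if w @ x > 0 else "background"

x = np.array([2.0, 0.5, 1.0])  # original input, classified as "person"
eps = 0.8                      # perturbation budget (small per-dimension change)

# FGSM-style step: nudge each input dimension against the sign of the
# score's gradient (for a linear model, the gradient is simply w).
x_adv = x - eps * np.sign(w)

print(predict(x))      # "person"
print(predict(x_adv))  # the small, structured shift flips the prediction
```

In the physical setting Cap_able targets, the "perturbation" is not a pixel edit but a printed texture, which is why robustness to lighting, pose, and distance becomes the hard part.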
Cap_able’s research focuses precisely on this domain: developing methods that generate images which, when translated onto fabric, produce adversarial textures robust enough to function in real-world conditions. When a subject wearing such garments appears in a scene, the detection system may fail to recognize them as a “person”, despite their clear presence to the human eye.
🧩🎛️
Adversarial Patching and the Role of Parametric Design
The technological foundation of Cap_able garments lies in adversarial patching — a method that generates visual patterns optimized to disrupt computer vision models. Our approach extends this principle through parametric textile pattern design, where each pattern is defined by controlled variables: shapes, colors, gradients, and spatial rules.
This enables both computational efficiency and precise design control over the visual outcome. Unlike non-parametric approaches, which often yield visually unconstrained results, parametric design allows the garment to perform its function without abandoning intention, taste, or cultural language.
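The idea of a pattern defined by controlled variables can be sketched as follows. Every name here — the parameter set, the palette, the spatial rule — is hypothetical, invented for illustration rather than drawn from Cap_able's actual design system:

```python
# Hypothetical parameter set for one textile pattern.
params = {
    "cell_size": 4,                        # side length of each repeating cell
    "palette": ["navy", "rust", "cream"],  # allowed colours
    "phase": 1,                            # spatial offset of the rule
}

def pattern(rows, cols, p):
    """Generate a grid of colour names from a deterministic spatial rule."""
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Parametric rule: the colour index depends only on the cell
            # coordinates, the cell size, and the phase offset.
            idx = ((r // p["cell_size"]) + (c // p["cell_size"])
                   + p["phase"]) % len(p["palette"])
            row.append(p["palette"][idx])
        grid.append(row)
    return grid

grid = pattern(8, 8, params)
```

Changing a single parameter (the phase, the palette, the cell size) redraws the entire pattern in a controlled, reproducible way — which is exactly what lets an optimization loop search for adversarial effectiveness without surrendering aesthetic constraints.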
🔬
State of the Art: A Comparative Perspective
Several research projects have explored the application of adversarial attacks to wearable items, each approaching the problem from a different angle.
AdvHat, for example, focuses on face recognition rather than object detection. Its goal is not to hide the presence of a person, but to prevent correct identification by attacking biometric models. The attack is constrained to a small rectangular region, which limits robustness across viewpoints and distances. Moreover, the optimization process is non-parametric, reducing both flexibility and aesthetic control.
Invisibility Cloak adopts a broader strategy by generating patterns that cover the entire garment and remain effective under changes in pose and viewpoint. While this improves robustness in physical settings, the lack of parameterization limits meaningful control over the visual outcome.
Accessorize to a Crime shifts focus again, targeting biometric feature recognition through eyeglass frames. Rather than object detection, it attacks identity recognition systems. As with the previous examples, it relies on non-parametric pattern generation, limiting adaptability and aesthetic integration.
🌐
Beauty as a Form of Resistance
Cap_able demonstrates that aesthetic research and functional resistance can coexist — creating garments that speak both to human sensibility and to algorithmic environments.
In a world where privacy is increasingly abstract, Cap_able brings it back into the tangible realm. Into fabric, pattern, color. Into something worn — and consciously chosen.