Revolutionizing 3D Modeling for Programmers with Visual Impairments

Denver, CO – A team of researchers from institutions including the University of Washington, Purdue University, MIT CSAIL, The Hong Kong University of Science and Technology (Guangzhou), Stanford University, NVIDIA, the University of Michigan, and the University of Texas at Dallas has introduced an innovative system called A11yShape. The system promises to transform how blind and low-vision (BLV) programmers work with 3D modeling, a field that has historically been largely inaccessible to them.
3D modeling demands complex spatial reasoning and visual feedback, which has made it a significant obstacle for people with visual impairments. Existing tools offer little support for non-visual interaction, making it virtually impossible to create or modify 3D models without sighted assistance. A11yShape, presented at the 27th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’25) in October 2025, emerges as a promising solution.
How A11yShape Works
A11yShape is an AI-assisted interactive 3D modeling system that integrates the code-based 3D modeling editor OpenSCAD with large language models (LLMs) such as GPT-4o. Its main innovation is a cross-representation highlighting mechanism that synchronizes selections across four representations of the same model: the code, a semantic hierarchy of its components, AI-generated descriptions, and the 3D rendering, so that selecting an element in one view highlights the corresponding element in the others.
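To make the description side of this pipeline concrete, the sketch below sends OpenSCAD source to GPT-4o and prints back a textual description a screen reader could present. It is a minimal illustration assuming the OpenAI Python client; the prompt wording, the example model, and the helper name `describe_model` are hypothetical, not taken from the paper.

```python
# Minimal sketch: generate a non-visual description of an OpenSCAD model
# with GPT-4o. Prompt wording and helper names are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

OPENSCAD_SOURCE = """
// A simple mug: a hollowed cylinder plus a torus handle.
module body()   { difference() { cylinder(h=40, r=20);
                                 translate([0, 0, 3]) cylinder(h=40, r=17); } }
module handle() { translate([23, 0, 20]) rotate([90, 0, 0])
                  rotate_extrude() translate([8, 0]) circle(r=3); }
body();
handle();
"""

def describe_model(scad_code: str) -> str:
    """Ask the LLM for a screen-reader-friendly description of the model."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Describe the 3D model defined by this OpenSCAD code "
                        "for a blind programmer: name each part, give its "
                        "dimensions, and explain how the parts are positioned "
                        "relative to one another."},
            {"role": "user", "content": scad_code},
        ],
    )
    return response.choices[0].message.content

print(describe_model(OPENSCAD_SOURCE))
```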
Key Features of A11yShape Include:
- Accessible Descriptions: Generates detailed and accurate textual descriptions of 3D models, validated through user studies.
- Version Control: Tracks iterative changes in models and code.
- Hierarchical Representation: Enables structured navigation of model components (see the sketch after this list).
- Interactive Verification Loop: Allows BLV users to directly query and validate spatial attributes or design decisions.
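The hierarchical representation, in particular, can be pictured as a tree recovered from the module structure of the OpenSCAD code. The paper does not specify how A11yShape builds its hierarchy, so the following is a deliberately naive, regex-based sketch of the idea; a real system would use a proper parser.

```python
# Naive sketch: recover a module-call hierarchy from OpenSCAD source so a
# screen-reader user can navigate the model's components as a tree.
# The shallow regex parse is an illustrative assumption, not A11yShape's code.
import re

def module_hierarchy(scad_code: str) -> dict:
    """Map each defined module to the other defined modules it instantiates."""
    names = re.findall(r"\bmodule\s+(\w+)", scad_code)
    # Split at each module header; segment i then holds module i's body
    # (plus any trailing top-level code after the last definition).
    bodies = re.split(r"\bmodule\s+\w+", scad_code)[1:]
    return {name: [n for n in names
                   if n != name and re.search(rf"\b{n}\s*\(", body)]
            for name, body in zip(names, bodies)}

example = """
module handle() { rotate_extrude() translate([8, 0]) circle(r=3); }
module body()   { cylinder(h=40, r=20); }
module mug()    { body(); translate([23, 0, 20]) handle(); }
mug();
"""
print(module_hierarchy(example))
# {'handle': [], 'body': [], 'mug': ['handle', 'body']}
```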
Impact and Study Results
A multi-session study with four BLV programmers demonstrated A11yShape’s effectiveness. After an initial tutorial session, participants independently created 12 distinct models during the test sessions, producing results they judged satisfactory. The study showed that participants could understand and modify 3D models autonomously, tasks that were previously infeasible without sighted assistance.
The workflows developed by the participants included incremental construction through AI verification loops, the use of semantic hierarchies for error correction, and the application of real-world metaphors to build mental models. The AI-generated descriptions were particularly valued, compensating for the lack of visual verification.
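As a rough illustration of that first workflow, incremental construction with an AI verification loop might look like the sketch below: after each code change, ask the model whether the stated goal was achieved before moving on. The helper, prompts, and edits are assumptions layered on the GPT-4o setup sketched earlier, not the participants' actual tooling.

```python
# Rough sketch of an "edit, then verify" loop: after each code change, ask
# the LLM whether the stated goal was achieved before continuing. All names
# and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def verify_edit(scad_code: str, intent: str) -> str:
    """Ask GPT-4o whether the current code satisfies the stated intent."""
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Given this OpenSCAD code, answer in one or two "
                        f"sentences: does the model now satisfy '{intent}'? "
                        "Refer to concrete positions and sizes."},
            {"role": "user", "content": scad_code},
        ],
    )
    return reply.choices[0].message.content

code = "cylinder(h=40, r=20);"                          # step 1: mug body
code += "\ntranslate([0, 0, 40]) cylinder(h=3, r=22);"  # step 2: add a rim
print(verify_edit(code, "a slightly wider rim sits on top of the mug body"))
```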

Challenges were observed, such as the high cognitive load of interpreting textual descriptions and difficulty in grasping spatial relationships. Even so, the findings point to promising directions for assistive technologies that empower BLV users to engage in inherently visual creative workflows.
Contributions and Future Directions
A11yShape represents the first AI-assisted 3D modeling system that leverages LLM-generated descriptions augmented with code, hierarchical component navigation, and interactive verification loops. Empirical insights from the user study highlight how BLV users overcome spatial cognition challenges and develop strategies to create 3D models without visual feedback.
This breakthrough not only opens doors for programmers with visual impairments in the field of 3D modeling but also points to a future where artificial intelligence can make complex creative domains more accessible to everyone, regardless of their visual abilities.
🔬 Reference:
Zhang, Z. (Jerry), Li, H., Yu, C. M., Faruqi, F., Xie, J., Kim, G. S-H., Fan, M., Forbes, A., Wobbrock, J. O., Guo, A., & He, L. (2025). A11yShape: AI-Assisted 3-D Modeling for Blind and Low-Vision Programmers. In The 27th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’25), Denver, CO, USA. ACM, New York, NY, USA. https://doi.org/10.1145/3663547.3746362