A seamless spatial workflow that transforms real rooms into testable futures—guiding users from capture to full-scale validation.
Multimodal Interaction System
Spalce combines gaze, gesture, and voice with real-world environmental awareness to create a low-friction, spatially intuitive workflow for home design.
Accessible Entry Options
Designed to support different preferences, abilities, and situations.
Look at the floating AI orb and pinch to open it.
Say: “Hi Spalce—assist me with this room.”
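As a rough illustration of the gaze-and-pinch entry, here is a minimal SwiftUI + RealityKit sketch of a floating orb that opens the assistant when the user looks at it and pinches (which visionOS delivers as a targeted spatial tap). The view name, orb styling, and `isAssistantOpen` state are assumptions for illustration, not Spalce's actual implementation.

```swift
import SwiftUI
import RealityKit

// Hypothetical entry-point view: a floating orb the user can look at and pinch.
struct AssistantOrbView: View {
    @State private var isAssistantOpen = false

    var body: some View {
        RealityView { content in
            // Build a small sphere to act as the AI orb.
            let orb = ModelEntity(
                mesh: .generateSphere(radius: 0.05),
                materials: [SimpleMaterial(color: .cyan, isMetallic: false)]
            )
            orb.position = SIMD3<Float>(0, 1.4, -0.8)              // float at roughly eye level
            orb.components.set(InputTargetComponent())             // make it pinch-targetable
            orb.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.05)]))
            orb.components.set(HoverEffectComponent())             // subtle highlight while gazed at
            content.add(orb)
        }
        // On visionOS, looking at an entity and pinching arrives as a spatial tap.
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { _ in isAssistantOpen = true }
        )
        .sheet(isPresented: $isAssistantOpen) {
            Text("Hi Spalce—assist me with this room.")
                .padding()
        }
    }
}
```

Only the gaze-and-pinch path is sketched here; the spoken entry would be handled separately through speech recognition.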
Hero Prompt as the Primary Focus
The large input field keeps the prompt front and center, so users can start their main workflow instantly without extra steps.
Contextual Quick Actions for Faster Starts
Quick actions below the prompt are tailored to the most common tasks and updated based on each user’s recent prompts, so users can jump into specific workflows instantly and with less friction.
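One way such contextual quick actions could work, shown purely as a sketch: score each candidate action by how many recent prompts mention its keywords, weighting newer prompts more heavily. The `QuickAction` type and the scoring heuristic are assumptions for illustration; the case study does not specify Spalce’s actual personalization logic.

```swift
import Foundation

/// Hypothetical quick-action model; names and fields are illustrative,
/// not taken from the Spalce codebase.
struct QuickAction {
    let title: String          // e.g. "Rearrange my living room"
    let keywords: Set<String>  // lowercase terms that signal relevance
}

/// Rank actions by how many recent prompts mention their keywords,
/// weighting newer prompts more heavily (recentPrompts is oldest-first).
func rankQuickActions(_ actions: [QuickAction],
                      recentPrompts: [String],
                      limit: Int = 3) -> [QuickAction] {
    var scores: [String: Double] = [:]
    for (index, prompt) in recentPrompts.enumerated() {
        let recencyWeight = Double(index + 1) / Double(recentPrompts.count)
        let words = Set(prompt.lowercased().split(separator: " ").map(String.init))
        for action in actions where !action.keywords.isDisjoint(with: words) {
            scores[action.title, default: 0] += recencyWeight
        }
    }
    return actions
        .sorted { (scores[$0.title] ?? 0) > (scores[$1.title] ?? 0) }
        .prefix(limit)
        .map { $0 }
}
```

For example, if the last few prompts mention “sofa” and “living room,” an action whose keywords include those terms would surface ahead of unrelated ones.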
Loading State
Spalce shows what the AI assistant is analyzing in real time, so users know what’s happening and what to expect.
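A minimal sketch of how this kind of narrated loading state could be driven: the analysis pipeline streams each stage as it begins, and the UI shows the stage label instead of an opaque spinner. The stage names and pacing below are hypothetical, loosely echoing the scanning, furniture detection, and lighting checks described elsewhere in this case study.

```swift
import Foundation

/// Hypothetical analysis stages surfaced to the user while the assistant works.
enum AnalysisStage: String, CaseIterable {
    case scanningRoom     = "Scanning room geometry…"
    case detectingObjects = "Detecting furniture and boxes…"
    case checkingLight    = "Checking natural light and glare…"
    case draftingLayouts  = "Drafting layout options…"
}

/// Emit each stage as it starts so a loading view can narrate progress.
func analysisProgress() -> AsyncStream<AnalysisStage> {
    AsyncStream<AnalysisStage> { continuation in
        Task {
            for stage in AnalysisStage.allCases {
                continuation.yield(stage)
                // Placeholder pacing; real stages would finish when their work does.
                try? await Task.sleep(nanoseconds: 800_000_000)
            }
            continuation.finish()
        }
    }
}
```

A loading view can then iterate the stream with `for await stage in analysisProgress()` and display `stage.rawValue` as each step begins.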
Give Users Full Control over Data Sharing
Spalce clearly explains AI data sharing and gives users control with options like “Always share” and “Not now.”
Multimodal Input
Spalce supports text, voice, and live conversation so users can choose the fastest, most comfortable way to interact in different situations.
Create a Style
Style Creator helps users lock in their style preferences or helps couples align on a shared design language.
Transparency Disclaimer Builds Trust
A brief disclaimer sets clear expectations that AI suggestions may be imperfect, encouraging users to double-check key details.
Synthesizing the affinity map revealed 4 key insights:
Competitive Analysis
I conducted a competitive analysis to understand where existing tools fall short and where Spalce could offer something meaningfully different. The matrix below summarizes the most relevant feature gaps and opportunity areas.
Together, these gaps revealed 4 opportunities that guided Spalce’s product direction:
Enable MR walkthroughs that let users experience layouts at true scale, using their own bodies to judge space, distance, and fit before purchase.
Combine AI layout suggestions, hands-on spatial editing, and full-scale walkthroughs into one continuous flow, so users can move from exploration → adjustment → validation without switching tools.
Allow users to place and compare furniture from multiple brands in the same room, instead of being limited to a single brand or retailer’s catalog.
Help users see how their existing furniture and new pieces from different brands work together in the same space before buying anything new.
In this phase, I synthesized research findings into clear jobs, constraints, and decision points that shaped early ideation.
Rather than jumping directly to features, I focused on what users were actually trying to decide and validate before moving in or buying new furniture. To ground these decisions, I used the Jobs-to-Be-Done (JTBD) framework to stay centered on users’ real goals instead of surface-level feature requests.
For the MVP, I prioritized the jobs most closely tied to pre-purchase decision-making and spatial confidence, while treating shared visualization as a secondary but important support need.
These flows helped me clarify:
what users do step by step,
when AI support is helpful,
and where users need direct spatial feedback to make decisions.
Rather than designing isolated features, I focused on creating end-to-end flows that move users from exploration → adjustment → validation without switching tools or contexts.
I created early spatial sketches to reason through how the system would understand and represent the real environment at room scale.
These sketches explored:
scanning the room in real time,
identifying and classifying objects (furniture, decor, moving boxes),
and visualizing boundaries, dimensions, and zones users can interact with.
This process helped me align system perception with how users already think about their space—what feels movable (such as sofas, tables, and chairs), what feels fixed (such as walls, windows, and doors), and what should or shouldn’t be included in layout decisions (like moving boxes or temporarily stored items).
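A minimal sketch of that movable / fixed / excluded mapping, assuming a hypothetical detection enum rather than the output of any particular scanning SDK:

```swift
import Foundation

/// Hypothetical categories a room scan might report; not tied to a specific SDK.
enum DetectedCategory {
    case sofa, table, chair          // furniture the user expects to rearrange
    case wall, window, door          // structure that stays where it is
    case movingBox, storedItem       // temporary clutter in the space
}

/// How each detection participates in layout decisions.
enum LayoutRole {
    case movable    // can be selected and repositioned in suggestions
    case fixed      // treated as a boundary or constraint (light, access)
    case excluded   // ignored when generating or evaluating layouts
}

/// Map raw detections onto the roles users already reason with.
func layoutRole(for category: DetectedCategory) -> LayoutRole {
    switch category {
    case .sofa, .table, .chair:
        return .movable
    case .wall, .window, .door:
        return .fixed
    case .movingBox, .storedItem:
        return .excluded
    }
}
```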
These explorations focused on:
selecting and modifying furniture directly in the space,
previewing layout alternatives without losing context,
receiving clear visual cues about lighting, glare, and comfort issues,
and deciding when to accept, adjust, or ignore AI suggestions (see the sketch below).
Instead of optimizing for speed or automation, I prioritized interactions that support understanding, comparison, and confidence, especially at moments where users hesitate before committing to a decision.
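For the accept / adjust / ignore exploration listed above, one simple way to model the outcome is a decision type that resolves to the placement actually shown in the room. The types below are illustrative assumptions, not Spalce’s real data model.

```swift
// Illustrative types only; not Spalce's actual data model.
struct FurniturePlacement {
    var position: SIMD3<Float>   // position in room space, in meters
}

/// How the user responded to one AI layout suggestion.
enum SuggestionDecision {
    case accept                        // apply the suggested placement as-is
    case adjust(offset: SIMD3<Float>)  // keep the suggestion but nudge it
    case ignore                        // dismiss it and keep the current layout
}

/// Resolve a suggestion against the user's decision.
func resolve(current: FurniturePlacement,
             suggested: FurniturePlacement,
             decision: SuggestionDecision) -> FurniturePlacement {
    switch decision {
    case .accept:
        return suggested
    case .adjust(let offset):
        return FurniturePlacement(position: suggested.position + offset)
    case .ignore:
        return current
    }
}
```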
Translating the full spatial workflow into detailed UI screens to clarify how scanning, environment understanding, multimodal control, and AI guidance come together in the actual product experience.
Refining multimodal interactions by exploring how gaze, gesture, and voice can complement each other to make spatial control smoother, faster, and more intuitive.
Strengthening how the AI adapts over time so recommendations become more aligned with different user preferences, and its reasoning becomes clearer, more contextual, and easier to trust.