Odicon AI: Constrained Generative Placement for Interior Design
How Odicon AI combines depth-aware structure, SKU retrieval, and controlled diffusion to place products with physical fidelity.
Precision is the interface
Interior design renderings fail when they look good at a glance but are wrong in practice. Odicon AI is built around one principle: no free-form imagination.
It generates only what the scene, the catalog, and the physics all agree on.
Every result is optimized around three constraints:
- spatial fidelity,
- SKU fidelity,
- and photometric fidelity.
That is the difference between a concept image and a production asset.
1) Geometry first
Odicon separates scene structure from visual identity.
Spatial grounding
- Depth and normal maps are extracted from the input render.
- Occlusion, floor contact, and perspective are made explicit.
- ControlNet uses these maps so objects land where they can exist.
This avoids the classic failure mode: furniture that looks right but does not belong in the room.
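One concrete example of making floor contact explicit: check that the base of a candidate placement zone actually rests on floor-like geometry before conditioning generation. This is a minimal sketch, not Odicon's implementation; the function name, the pixel-coordinate zone format, and the thresholds are all illustrative assumptions, and the normal map stands in for the one extracted from the input render.

```python
import numpy as np

def floor_contact_ok(normals: np.ndarray, zone: tuple, up=(0.0, 1.0, 0.0),
                     cos_thresh: float = 0.9, min_frac: float = 0.8) -> bool:
    """Check that the bottom band of a placement zone sits on floor-like
    pixels, i.e. surface normals pointing close to the scene 'up' vector.

    normals: (H, W, 3) unit normal map extracted from the input render.
    zone:    (x0, y0, x1, y1) placement rectangle in pixel coordinates.
    """
    x0, y0, x1, y1 = zone
    band_h = max(1, (y1 - y0) // 10)          # thin strip at the zone's base
    band = normals[y1 - band_h:y1, x0:x1]     # bottom rows of the zone
    cos_up = band @ np.asarray(up, dtype=float)  # cosine vs. the up direction
    return float((cos_up > cos_thresh).mean()) >= min_frac

# Toy scene: floor everywhere (all normals point up).
normal_map = np.zeros((64, 64, 3))
normal_map[..., 1] = 1.0
print(floor_contact_ok(normal_map, (10, 20, 30, 60)))  # True
```

A placement that fails this test (e.g. a zone whose base lands on a wall, where normals face the camera) can be rejected before any diffusion step runs.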
2) Identity preserved
The system does not ask the model to invent a product from scratch.
It searches the catalog first, then conditions generation.
- Encode catalog references into a multimodal vector index.
- Retrieve the nearest neighbors for the selected SKU.
- Inject those references as high-confidence conditioning signals.
The result is consistent fabric, shape, and hardware details, even across long sessions and noisy prompts.
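The retrieval step can be sketched as cosine-similarity nearest neighbors over a catalog embedding matrix. This is a toy stand-in under stated assumptions: a plain numpy matrix replaces the production vector index, and `k = 6` mirrors the `kReference` value in the payload example below; the function name and embedding dimension are hypothetical.

```python
import numpy as np

def top_k_references(index: np.ndarray, query: np.ndarray, k: int = 6) -> np.ndarray:
    """Return indices of the k catalog embeddings most similar to `query`."""
    index_n = index / np.linalg.norm(index, axis=1, keepdims=True)
    query_n = query / np.linalg.norm(query)
    sims = index_n @ query_n                      # cosine similarity per row
    return np.argsort(-sims)[:k]                  # best matches first

rng = np.random.default_rng(0)
catalog = rng.normal(size=(100, 32))              # 100 SKUs, 32-dim embeddings
query = catalog[42] + 0.01 * rng.normal(size=32)  # noisy view of SKU 42
print(top_k_references(catalog, query, k=6)[0])   # 42
```

The retrieved references are what get injected as conditioning signals; the generator never has to hallucinate the product's identity.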
3) Seamless integration
Placement is solved in latent space, not as a hard image paste.
- A soft mask defines the insertion zone.
- Gaussian falloff smooths edge transitions.
- Latent blending merges the synthetic object with scene context before decode.
This produces transitions that read as designed, not composited.
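The three steps above can be sketched in latent space directly: build a soft mask with a Gaussian-style falloff around the insertion zone, then blend object and scene latents before decoding. This is an illustrative sketch, not the production code; the zone format (latent-pixel coordinates), `sigma`, and function names are assumptions.

```python
import numpy as np

def soft_mask(h, w, zone, sigma=4.0):
    """Insertion zone softened with a Gaussian falloff.
    zone is (x0, y0, x1, y1) in latent-pixel coordinates."""
    x0, y0, x1, y1 = zone
    yy, xx = np.mgrid[0:h, 0:w]
    # distance from each pixel to the rectangle (0 inside it)
    dx = np.maximum(np.maximum(x0 - xx, xx - (x1 - 1)), 0)
    dy = np.maximum(np.maximum(y0 - yy, yy - (y1 - 1)), 0)
    return np.exp(-(dx**2 + dy**2) / (2 * sigma**2))  # 1 inside, smooth decay

def blend_latents(scene, obj, mask):
    """Merge object latents into the scene context before decode."""
    return mask[None] * obj + (1 - mask[None]) * scene

scene = np.zeros((4, 32, 32))                     # toy scene latent (C, H, W)
obj = np.ones((4, 32, 32))                        # toy object latent
m = soft_mask(32, 32, (8, 8, 24, 24), sigma=3.0)
out = blend_latents(scene, obj, m)
print(out[0, 16, 16])                             # 1.0 at the zone centre
```

Because the blend happens before the decoder runs, the edges inherit the scene's texture statistics instead of showing a paste seam.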
4) Lighting as a constraint
Visual realism is treated as an optical constraint.
- Estimated light direction and intensity are propagated into the diffusion process.
- Material response is adjusted per object type.
- Contact shadows and highlights are generated from geometry-aware depth.
The scene keeps its coherence from corners to shadows.
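As a minimal illustration of propagating an estimated light direction, a Lambertian diffuse term can be computed per pixel from the same normal map used for spatial grounding. This is a sketch of the idea only; Odicon's actual material-response model is not specified here, and the function name is hypothetical.

```python
import numpy as np

def lambert_shading(normals, light_dir, intensity=1.0):
    """Per-pixel diffuse response: clamp(n . l, 0) * intensity."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    return intensity * np.clip(normals @ l, 0.0, None)

normals = np.zeros((4, 4, 3))
normals[..., 1] = 1.0                              # flat floor, normals up
print(lambert_shading(normals, (0.0, 1.0, 0.0))[0, 0])  # 1.0
```

Feeding a map like this into the diffusion process (rather than hoping the model infers the lighting) is what keeps highlights and contact shadows consistent with the room's existing illumination.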
5) Reliable operations
Odicon runs as an asynchronous queue pipeline, split by responsibility:
- Request layer: validation, tenant context, payload assembly.
- Orchestration layer: queueing, retries, and scheduling.
- Compute layer: GPU workers for constrained diffusion jobs.
This gives clean separation and predictable behavior at scale:
- Non-blocking API responses under heavy workloads.
- Deterministic job lifecycle (queued → processing → completed/failed).
- Independent horizontal scaling across worker pools.
- Safe retries without user-facing failures.
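The lifecycle and retry behavior can be sketched with a plain in-process queue. This is a deliberately simplified model under stated assumptions: a single synchronous worker loop, a retry cap of 3, and illustrative names throughout; the production system is asynchronous and distributed.

```python
import dataclasses
import queue

@dataclasses.dataclass
class Job:
    request_id: str
    state: str = "queued"
    attempts: int = 0

def run_worker(jobs: "queue.Queue[Job]", handler, max_retries: int = 3):
    """Drain the queue, moving each job queued -> processing -> completed,
    or back to queued on a transient failure (failed once retries run out)."""
    done = []
    while not jobs.empty():
        job = jobs.get()
        job.state = "processing"
        try:
            handler(job)
            job.state = "completed"
            done.append(job)
        except Exception:
            job.attempts += 1
            if job.attempts < max_retries:
                job.state = "queued"            # safe retry, back on the queue
                jobs.put(job)
            else:
                job.state = "failed"
                done.append(job)
    return done

q = queue.Queue()
q.put(Job("od-0001"))
flaky = {"calls": 0}
def handler(job):                               # fails once, then succeeds
    flaky["calls"] += 1
    if flaky["calls"] == 1:
        raise RuntimeError("transient GPU error")

results = run_worker(q, handler)
print(results[0].state)                         # completed
```

The transient failure never reaches the caller: the job is requeued, retried, and completes, which is exactly the user-facing guarantee the list above describes.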
Why this matters
Odicon is not a prompt trick. It is a constrained system for reliable scene synthesis.
- geometry first,
- identity second,
- coherence all the way through.
That is why the output is practical: reviewable, repeatable, and ready for decision-making.
Engineering Blueprint (Compact)
Request flow
Designer UI
-> API layer
-> validation + context enrichment
-> job payload creation
-> task queue
-> compute worker
-> constrained generation
-> asset storage + state update
-> client notification (stream/webhook)
Queue payload example
{
  "requestId": "od-2026-03-30-a2f9",
  "tenantId": "acme-design",
  "roomImageId": "img_room_9812",
  "skuId": "sku_chair_204",
  "placement": {
    "zone": [0.41, 0.58, 0.27, 0.36],
    "softMaskBlurPx": 24,
    "shadowStrength": 0.62
  },
  "conditioning": {
    "spatial": {
      "controlNet": true,
      "depthScale": 0.9,
      "normalScale": 0.8
    },
    "identity": {
      "adapter": "ip-adapter",
      "kReference": 6,
      "referenceSource": "catalog-embedding-index-v3"
    }
  },
  "output": {
    "format": "png",
    "width": 1536,
    "quality": "high",
    "provenance": true
  }
}
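The request layer's validation step can be sketched against a payload of this shape. This is a minimal sketch of the idea, not the production validator: the required-key list and the zone bounds check are assumptions drawn from the example above.

```python
import json

REQUIRED = ("requestId", "tenantId", "roomImageId", "skuId", "placement")

def validate_payload(raw: str) -> dict:
    """Request-layer check: required keys present, placement zone in [0, 1]."""
    payload = json.loads(raw)
    missing = [k for k in REQUIRED if k not in payload]
    if missing:
        raise ValueError(f"missing keys: {missing}")
    zone = payload["placement"]["zone"]
    if len(zone) != 4 or not all(0.0 <= v <= 1.0 for v in zone):
        raise ValueError("placement.zone must be four values in [0, 1]")
    return payload

ok = validate_payload('{"requestId": "od-1", "tenantId": "t", '
                      '"roomImageId": "img", "skuId": "sku", '
                      '"placement": {"zone": [0.41, 0.58, 0.27, 0.36]}}')
print(ok["skuId"])                              # sku
```

Rejecting malformed payloads here, before anything is queued, is what keeps the downstream job lifecycle deterministic.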