| Item | Details |
|---|---|
| Proposal Submission Deadline | May 22, 2026, 11:59 PM PST |
| Announcement | May 31, 2026, 11:59 PM PST |
| Workshop Date | June 3, 2026, CVPR 2026, Denver |
| Selection | 1 final awardee and 4 finalists |
| Eligibility | Current PhD students and postdoctoral researchers |
| Research Gift Fund | Up to USD 30,000 to the awardee's institution as gift funding. No indirect costs are included. Final terms are subject to sponsor approval and institutional policies. |
The Rising Star Award for Spatial Intelligence is organized as part of the E2E3D Workshop at CVPR 2026. The award recognizes an early-career researcher with strong research achievements and a clear future vision for spatial intelligence.
The award focuses on work that connects 3D perception, 3D generation, 3D representations, world models, spatial reasoning, embodied AI, XR, and real-time systems. The goal is to highlight a research agenda that helps future models understand, reconstruct, generate, and act in 3D and dynamic environments.
Applicants must be in one of the following roles at the time of submission:
- Current PhD students
- Postdoctoral researchers
Additional guidelines:
Each applicant should submit two PDF documents, each at most one page.
Applicants may use any format, but a strong one-page proposal should state the core research question, explain why the problem matters now, outline the technical path, define how progress will be measured, and describe what the community will gain.
Relevant areas include, but are not limited to:
| Area | Example Topics |
|---|---|
| 3D reconstruction | Single-view, multi-view, and video-to-3D reconstruction; RGB-D and LiDAR reconstruction; SLAM, mapping, localization, and geometry from unposed images. |
| 3D generation and editing | Object, scene, human, and asset generation; text-to-3D, image-to-3D, video-to-3D; controllable editing; physically grounded generation. |
| 3D representations | Neural radiance fields, 3D Gaussian splatting, implicit fields, meshes, point clouds, occupancy fields, signed distance fields, and hybrid representations. |
| 4D and world models | Dynamic scenes, temporal consistency, persistent spatial memory, physical prediction, scene simulation, and long-horizon video understanding. |
| Spatial intelligence and embodied AI | 3D vision-language models, vision-language-action models, spatial reasoning, navigation, manipulation, affordance learning, and action-conditioned perception. |
| Spatial computing, XR, and mixed reality | AR, VR, mixed reality, spatial maps, real-time scene understanding, user-facing 3D perception, and interaction in spatial computing systems. |
| Data, evaluation, and systems | Large-scale 3D and video data engines, synthetic data, multi-sensor logs, benchmarks, robustness, latency, memory, energy, and edge deployment. |
Applications will be reviewed by the award committee, which will take each applicant's career stage into account. Main criteria include:
This call may be updated on the workshop website. Applicants should follow the final instructions posted on the E2E3D website.
Submit your application through the online form: https://forms.gle/2Vm7mtZWKejG85oG6. You will need to be signed in to a Google account to upload PDFs.