Bentley’s most popular reality modeling applications
The Reality Modeling WorkSuite is a sales bundle that offers you access to Bentley’s most popular reality modeling applications, iTwin Capture Modeler and Reality Data Management. Save time with Bentley’s solutions and continually produce high-fidelity 3D reality models.
What is the Reality Modeling WorkSuite?
The Reality Modeling WorkSuite offers you an end-to-end solution for adding digital context to your projects. Bentley’s reality modeling software can handle projects of any size and data from many sources, including point clouds, imagery, textured 3D meshes, and traditional GIS resources. Reality Modeling WorkSuite can integrate and combine all your reality data into one single digital context.
Reality Modeling WorkSuite
- Create, share and consume with one solution
- Generate engineering-ready geometrical models
- Produce 3D meshes with high quality texture
- Automate 3D mesh generation workflow
*Prices vary per region. For more options, see the licensing and subscriptions section.
Frequently Asked Questions
Yes. iTwin Capture Modeler is the most versatile solution on the market and automatically extracts details from photos of any resolution. You can register your datasets properly using control points.
The most popular fisheye cameras (GoPro, DJI…) are supported in our camera database. 360 cameras are not supported in the 3D reconstruction process; their output usually results from a merge of planar images, which can be used as individual inputs for 3D reconstruction.
Yes. Most camera manufacturers’ RAW formats are supported, but currently only 8 bits per channel are used.
Yes. iTwin Capture Modeler accepts videos as an input in MP4/WMV/AVI/MOV/MPEG formats, and automatically extracts a frame according to a user-defined period.
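As an illustration only (not Bentley code), the relationship between video length, extraction period, and the number of frames produced can be sketched as follows. The assumption that sampling starts at t = 0 is ours:

```python
import math

# Hypothetical sketch: how many frames a fixed-period extraction yields.
# Assumes sampling starts at t = 0 (our illustration, not the product's logic).
def extracted_frame_count(video_seconds: float, period_seconds: float) -> int:
    """Number of frames obtained by sampling one frame every `period_seconds`."""
    if period_seconds <= 0:
        raise ValueError("period must be positive")
    return math.floor(video_seconds / period_seconds) + 1

# Example: a 60 s clip sampled every 2 s yields 31 frames.
print(extracted_frame_count(60.0, 2.0))  # 31
```

A shorter period yields more frames and therefore more overlap between them, at the cost of longer processing.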
Mask creation or color equalization can be executed prior to iTwin Capture Modeler processing without any problem. However, any geometric alteration of images (cropping, rotating, distorting…) must be avoided, as it will cause failures.
Yes. You can import an OPT file, or add a camera to your database and input its specific calibration parameters, such as distortion parameters, principal point, and focal length.
We recommend that every part of the scene be captured in at least three neighboring photographs, with more than 60% overlap in all directions.
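To make the 60% figure concrete, here is a minimal sketch (ours, not Bentley code) of the capture spacing that preserves a given forward overlap, assuming a simple nadir capture with a known ground footprint:

```python
# Sketch under a simple nadir-camera assumption (not from the product):
# spacing between consecutive shots that keeps a given forward overlap.
def max_capture_spacing(footprint_m: float, overlap: float = 0.6) -> float:
    """Maximum distance between consecutive shots so that the ground
    footprint (in the direction of travel) overlaps by at least `overlap`."""
    if not 0 <= overlap < 1:
        raise ValueError("overlap must be in [0, 1)")
    return footprint_m * (1 - overlap)

# Example: a 50 m ground footprint with 60% overlap allows a shot every 20 m.
print(max_capture_spacing(50.0, 0.6))  # 20.0
```

The same reasoning applies across flight lines: the lateral spacing between lines must also leave at least 60% of the footprint in common.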
We recommend using a camera with a reasonable sensor size (in mm) and a fixed focal length. Different camera properties in a single project are well handled by iTwin Capture Modeler, but it is recommended to keep the same “camera conditions” as much as possible throughout the capture to ensure a robust adjustment of all parameters.
Ground Control Points (GCPs) are not mandatory but are highly recommended to accurately georeference the model, correct drift on corridor acquisitions, increase altitude precision, register different datasets in the same project, and assess the precision of aerotriangulation in formal reports.
It is always better to have a complete and accurate geotagged dataset. But in case information is missing, iTwin Capture Modeler can still process imagery alignment and ensure geo-registration based on what is available. The quality of geo-registration will be linked to the accuracy of available geotags.
No information of this kind is required. However, if images are listed in an external columned file containing 3D coordinates, it will help accelerate imagery alignment.
We offer a comprehensive acquisition guide that describes all the best practices for acquiring photos for a specific purpose. It is possible to acquire vertical imagery in a first flight, then acquire oblique imagery and add both to the same project.
Not for now. We believe that our job is to provide the best processing software, and we cooperate with major UAV manufacturers who already have their own mission planning solutions.
The software processes all images regardless of orientation, including oblique imagery, in the same way.
There are several ways to scale and geo-register a model: through geotags embedded in the photos, ground control points, or by adding manual tie points in the photos and defining a scale constraint.
Static parts will be reconstructed; moving parts (cars, people…) will not. iTwin Capture Modeler offers a bounding box feature to avoid undesired background reconstruction.
iTwin Capture Modeler includes a touch-up module allowing basic clean-up operations (hole filling, floating-part removal, surface flattening…). It also allows OBJ-format export, which is supported by all 3D editing tools. After being touched up in a third-party tool, the reality mesh can be imported back into iTwin Capture Modeler for update.
Reality meshes in 3Dtiles format can be manipulated, classified, and annotated in MicroStation, and Bentley iTwin Capture Web Viewer.
The global accuracy is about 1-2 pixels (resolution=projected size of a pixel on the scene, also called Ground Sampling Distance for aerial acquisitions) in a plane perpendicular to the acquisition, and 1-3 pixels along the main acquisition direction.
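Since these accuracy bands scale with the Ground Sampling Distance, a hedged sketch of the standard pinhole GSD relation (our own illustration, with example camera values, not taken from the product) can help size expectations:

```python
# Illustration only: standard pinhole relation for Ground Sampling Distance,
# then the 1-2 px / 1-3 px accuracy bands quoted above.
def ground_sampling_distance(pixel_size_um: float, focal_length_mm: float,
                             distance_m: float) -> float:
    """Projected size of one pixel on the scene, in metres:
    GSD = pixel_size * distance / focal_length."""
    return (pixel_size_um * 1e-6) * distance_m / (focal_length_mm * 1e-3)

# Example (hypothetical camera): 4.4 um pixels, 24 mm lens, flown at 100 m.
gsd = ground_sampling_distance(4.4, 24.0, 100.0)   # ~0.018 m, i.e. ~1.8 cm
planar_accuracy = (1 * gsd, 2 * gsd)  # 1-2 px in the plane perpendicular to acquisition
depth_accuracy  = (1 * gsd, 3 * gsd)  # 1-3 px along the main acquisition direction
```

Halving the flying distance (or doubling the focal length) halves the GSD and tightens both bands proportionally.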
Yes, absolute accuracy of the reality mesh will increase if input data (camera positions) are accurately defined.
Every acquisition process may benefit from a specific camera system. However, all cameras, from a mere smartphone to highly specialized aerial multidirectional camera systems, are supported by iTwin Capture Modeler. What is important is the capture pattern, the resolution (projected size of pixels on the scene), the sharpness of the photos, and a fixed focal length.
Definitely! This will help you enliven any captured context in minutes. 3SM is the more optimized format for this purpose.
Urban area models generated by iTwin Capture Modeler are comparable to CityGML LOD3 but do not come with any semantics.
Georeferenced 3D models can be overlaid with laser scans. This allows users to get the best of both worlds: either completing a LiDAR acquisition with photos, or extracting more detail and increasing precision for the same area.
iTwin Capture Modeler produces multiresolution meshes in KML format, which can be directly loaded into Google Earth.
OpenCities Map, Esri’s ArcGIS, SuperMap, and more generally, any 3D GIS or visualization software compatible with a multiresolution-tiled format (OpenCities Planner, Unity 3D, OpenSceneGraph, Eternix’ BlazeTerra, etc.).
Web-ready 3SM and Cesium 3D-Tiles will be compatible with iTwin JS web viewer.
All V8i SS4, CONNECT, and DGNDB platform-compatible products will support the 3SM and 3MX formats: Descartes, Map, ABD, OpenRoads ConceptStation, etc.
iTwin Capture Modeler can export STL or OBJ formats, which are widely accepted by 3D printers.
This is about 150 MB for a comparable project: roughly 100 times lighter than a colored LAS point cloud and 22 times lighter than the POD format.
Yes. iTwin Capture Modeler can produce models in LAS, LAZ, OPC and POD point cloud formats.
For communication purposes, LumenRT will do the job quite nicely. For more technical analysis, the OpenFlow product line is better suited.
MicroStation loads the Spatial Reference System (SRS) used to produce the 3SM file and references the model accordingly.
In iTwin Capture Modeler, there is an “Adjust photos onto Pointcloud” feature that automatically runs alignment before 3D reconstruction.
Yes, the spatial reference system is selected at the production stage by the user from a dedicated library.
Bentley iTwin Reality Data Viewer is the most suited path to share with stakeholders. It will display 3DTiles hosted on Reality Data Management, allowing photo-navigation, annotation, and permission management.
Clash detection can be done in MicroStation (on extracts of the mesh), or on point clouds in various solutions, but not in a viewer, whether web or desktop.
This is fully automatic as far as the input datasets are suitable (overlap, sharpness, optical properties, etc.).
Yes. There is a report at the end of the aerotriangulation (AT) containing the various RMS values as well as the processing parameters. Quality metrics are also viewable in the 3DView.
Production time is measured as wall-clock time. The average observed production speed, on a suitable workstation, is about 20 Gpix per engine per day.
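The 20 Gpix-per-engine-per-day figure makes throughput estimates simple arithmetic. A rough sketch (our illustration, with hypothetical project numbers):

```python
# Rough estimate built on the ~20 Gpix/engine/day throughput figure quoted above.
# Real production time varies with hardware, settings, and scene complexity.
def estimated_production_days(total_gigapixels: float, engines: int = 1,
                              gpix_per_engine_per_day: float = 20.0) -> float:
    """Wall-clock days to process a dataset of `total_gigapixels`."""
    if engines < 1 or gpix_per_engine_per_day <= 0:
        raise ValueError("need at least one engine and positive throughput")
    return total_gigapixels / (engines * gpix_per_engine_per_day)

# Example: 1,000 photos of 40 MP each = 40 Gpix; with 2 engines, about 1 day.
print(estimated_production_days(40.0, engines=2))  # 1.0
```

Adding engines scales the estimate down linearly, which is the point of the per-engine figure.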
Either through control points or geotags with the photos.
Through a dedicated UI in the software. You can also load them through a text file and then identify them in your photos.
Either by georeferencing the model, or by adding manual tie points with a distance constraint in the editor.
Yes, Bentley Descartes or iTwin Capture Manage & Extract are dedicated to this type of application.
iTwin Capture Modeler can perform such operations. Feature extraction relies on AI-trained models available on dedicated Communities page. Ground extraction is one of them.
Yes, using the 3D viewer which is included in iTwin Capture Modeler. Users can measure coordinates, distances, height differences, areas, and volumes. In the web viewer, only coordinates, distances, and height differences can be measured.
Volumes are calculated by referencing either a mean plane created through the georeferenced selection polygon or a custom plane at a specific height.
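The same idea can be illustrated with a toy computation (ours, not the product’s implementation): sum the signed height differences between a sampled grid and the reference plane.

```python
# Toy illustration of volume against a reference plane, not the product's code.
# `heights` is a flat list of per-cell elevations sampled over the selection.
def volume_above_plane(heights, cell_area_m2: float, plane_height_m: float) -> float:
    """Signed volume between a sampled height grid and a reference plane:
    material above the plane counts positive (cut), below counts negative (fill)."""
    return sum((h - plane_height_m) * cell_area_m2 for h in heights)

# Example: four 1 m^2 cells at 2 m, 3 m, 1 m, 2 m against a 2 m plane.
# The +1 m cell and the -1 m cell cancel, so the signed volume is zero.
print(volume_above_plane([2.0, 3.0, 1.0, 2.0], 1.0, 2.0))  # 0.0
```

Using the mean plane of the selection polygon as `plane_height_m` makes cut and fill balance by construction; a custom plane instead measures stockpile or excavation volume against a chosen datum.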
No. This is what makes iTwin Capture Modeler so unique. City or bridge models can easily reach dozens of gigabytes on a hard disk and be streamed through a web or local server, thanks to the multiresolution architecture and the optimization of the mesh.
Photo acquisition! The reconstruction process is truly straightforward when the photos are appropriate: resolution, sharpness, overlap.
The software applies to any photo dataset (aerial, ground, outdoor, indoor) as long as the objects in the scene are static (if they move too much, they will be automatically removed). The best practice for shooting photos indoors is to walk sideways, back to a wall, and shoot photos in multiple directions toward the front (slightly upward, downward, rightward, leftward). The acquisition guide provides more information on this procedure.