As our world becomes increasingly digitized, so does our ability to connect with it. Imagine if you could search your surroundings the same way you search the web. Augmented reality (AR) is one such technology, enabling us to interact with the real environment. The recent Apple keynote gave us a sneak peek into how common augmented reality could become. Ten years ago, Apple showed us a device that revolutionised the way we interact with technology. Now, ten years later, there is a high probability that AR will bring about a new revolution in how we interact with the digital world. Some readers might be asking, "What is augmented reality?" Augmented reality is the real-time integration of digital information with the user's environment. Unlike virtual reality, which creates a completely artificial environment, augmented reality uses the existing environment and overlays additional information on top of it. No explanation of augmented reality is complete without mentioning the reality-virtuality continuum. Figure 1 represents the reality-virtuality continuum proposed by Milgram et al. (1995). It places augmented reality in the space between a completely modelled and an unmodelled world, i.e. between a real and a virtual environment.
Figure 1. Reality-Virtuality Continuum (Source: Milgram, Takemura, Utsumi, & Kishino, 1995)
Is Apple the first to bring AR to consumer devices? The answer is no. Footprints of AR research have appeared in the blogs of the tech giants, as well as in academic journals, for some time now, and Apple's ARKit is just the latest entry in this domain. In 2013, Google introduced Google Glass to the developer community with the mission of producing a ubiquitous computer. It displayed information in a smartphone-like, hands-free format, with users communicating with the Internet via natural-language voice commands. Although it was discontinued in 2015, it showed us a new way to interact with digital content. Meanwhile, in 2014 Google introduced Project Tango, a smart device with embedded depth cameras for AR applications, which has since extended to commercial AR smartphones such as the Lenovo Phab 2 Pro and the Asus ZenFone AR. Another notable augmented reality kit in the consumer market is the Microsoft HoloLens, one of the most advanced and widely used platforms among AR enthusiasts. It combines depth cameras and gyroscopic sensors to track the wearer's motion and place virtual components (holograms) at precise locations.
Now that we understand what AR is and which consumer devices are equipped with it, let us get to the focus of this article: what does AR mean to the construction IT community? For that, let us go forward in time…
Let's assume you are a site engineer walking onto the construction site at the start of the day. You don't have any drawings or tablet computers with models to aid you. All you have is a smart glass. You interact with the virtual assistant in the smart glass using your voice and gestures. You ask the virtual assistant, "Show me the activities planned for today." You don't see any Gantt charts or work-breakdown diagrams. Instead, you see a 4D building information model overlaid on the actual construction site. You can see the virtual crane lifting components and new structures rising exactly where they should on the partially built structure. Suddenly you get a warning: the planned construction sequence is in conflict with the way the structure is currently built, and the artificial intelligence (AI) element of the virtual assistant has detected it. The reason is a deviation between how the structure or temporary works have been built and how they were planned. The device, with its context awareness (understanding of the real world), could detect this conflict easily. "Find me the shortest way to the conflict location," you ask the virtual assistant. A navigation map is overlaid in front of you, taking you to the precise location. As you walk, you pass various construction components and equipment. Whenever you gaze at a piece of equipment or a component, the relevant information pops up and is displayed next to it. Similarly, construction areas that are lagging behind schedule have a red overlay, while those ahead of schedule have a green one. While climbing the stairs to reach the conflict location, you notice that sufficient safety precautions have not been put in place for an area that might be a fall hazard. You mark the area with a gesture and ask the virtual assistant to raise a safety issue.
Immediately, the safety issue is marked on the building information model on the server, and a push notification is sent to the safety manager assigned to this area. Finally, you reach the area of conflict identified by the virtual assistant. You can see that there is a workspace conflict, and the virtual assistant has already marked up the area using distinct colours and overlays so that you can identify the issue easily. You ask the virtual assistant for possible solutions, and a second later you get a revised plan for the day with no conflicts. You approve the plan, and the update is sent automatically to everyone involved.
This may seem like a scene from a sci-fi movie. However, it might be how you interact with building information models in the near future. Advancements in the processing power of mobile computing devices, cloud computing, and cutting-edge developments in augmented reality and artificial intelligence pave the way for context-aware interaction with construction information. We are all trying to embed more and more information into building models: geometry, material properties, construction sequences, and so on. With every addition of information the value of the model increases, and so does its potential to perform many tasks automatically. However, with the way BIM is currently used, a significant portion of this potential is left untapped, because BIM is accessed through desktop and laptop computers. A building information model is a digital twin of the actual structure, embedded with all the information you have about it, yet it sits inside a computer screen. Imagine if all that information were overlaid on top of the actual structure. All interactions with the digital content would be intuitive, and you could benefit from many intelligent services, because the information model gains another dimension of information, the real-world situation, when you overlay the digital model onto the real world. This information can be used for intelligent operations such as progress monitoring, as-built modelling, and safety inspections. Before going into these, let us see how augmented reality works.
Augmented reality devices are equipped with 3D cameras or sensors to map the 3D environment in view. Once the device has mapped the 3D information, the building information model is aligned with this 3D map, also called the reality mesh. This process is called registration. Registration aligns a virtual camera with the building information model, so that as you move your head (assuming you are using a head-mounted display) the virtual camera replicates the movement, and the rendered portion of the building information model changes accordingly. Importantly, in addition to the modelled information, we now get data about the actual 3D environment around the user. This extra information can be fed back into the BIM model to provide a number of intelligent services, over and above the overlay of the BIM model on the real world described earlier. Some of these services include:
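At its core, the registration step estimates a rigid transform (a rotation and a translation) that maps BIM coordinates onto reality-mesh coordinates. A minimal sketch of one common approach, the Kabsch algorithm, is shown below; it assumes that correspondences between a few BIM anchor points and the matching reality-mesh points are already known (in practice these would come from markers or feature matching), and all point values here are hypothetical.

```python
import numpy as np

def register(bim_pts, mesh_pts):
    """Estimate the rigid transform (R, t) mapping BIM points onto mesh points
    using the Kabsch algorithm: centre both point sets, then find the optimal
    rotation via SVD of the cross-covariance matrix."""
    cb = bim_pts.mean(axis=0)            # centroid of BIM anchor points
    cm = mesh_pts.mean(axis=0)           # centroid of reality-mesh points
    H = (bim_pts - cb).T @ (mesh_pts - cm)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cm - R @ cb
    return R, t

# Hypothetical anchor points picked in the BIM coordinate frame
bim = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)

# Simulate the same points as seen in the reality mesh: rotated 30 deg about z,
# then shifted (as if the headset started in an arbitrary pose)
theta = np.radians(30)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
mesh = bim @ Rz.T + np.array([2.0, -1.0, 0.5])

R, t = register(bim, mesh)
aligned = bim @ R.T + t        # BIM points re-expressed in mesh coordinates
assert np.allclose(aligned, mesh)
```

Once `R` and `t` are known, every BIM element can be re-expressed in the headset's coordinate frame, which is what lets the virtual camera track head movement correctly.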
Quality inspection: The 3D mesh of the real world enables you to check whether the structure has been built according to the design. You no longer need traditional surveying techniques to verify that the dimensions of the built structure adhere to the design; the computer can do that automatically.
Progress monitoring: The as-planned model can be compared with the captured 3D mesh to determine how much work has progressed (https://www.ice.org.uk/news-and-insight/the-civil-engineer/may-2017/mixed-reality-a-new-way-to-look-at-infrastructure).
Safety checks: The 3D mesh can be used to perform safety inspections to identify potential hazards related to spatial configurations.
In-site navigation: The BIM model and the user's current location can be used to generate path guidance to required points on the site.
4D collision warning: Construction sequences can be simulated in the real world to identify possible collisions and issue safety warnings in advance.
Identifying hidden utilities: The precise superimposition of BIM on the actual environment enables you to project utilities already modelled in BIM onto the real world, for example underground utilities or electrical services behind a wall.
Complex assembly guidance: The technology understands the environment and helps the user perform complex assemblies by simulating them in real time and in real space.
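Several of the services above (quality inspection, progress monitoring) boil down to comparing as-built geometry captured in the reality mesh against as-designed geometry in the BIM model. A minimal sketch of such a deviation check is given below; the tolerance value, point coordinates, and function name are all illustrative assumptions, not part of any real BIM API.

```python
import numpy as np

# Hypothetical allowable deviation between design and as-built, in metres
TOLERANCE_M = 0.02

def check_deviation(design_pts, asbuilt_pts, tol=TOLERANCE_M):
    """Compare as-built points (sampled from the reality mesh) against their
    design counterparts and flag any that deviate beyond the tolerance."""
    dev = np.linalg.norm(asbuilt_pts - design_pts, axis=1)
    return dev, dev > tol

# Illustrative design coordinates of three structural points (metres)
design = np.array([[0, 0, 0], [5, 0, 3], [5, 4, 3]], float)

# Simulated as-built positions: the second point is 30 mm off
asbuilt = design + np.array([[0.005, 0, 0], [0, 0.03, 0], [0, 0, 0.01]])

dev, flags = check_deviation(design, asbuilt)
for i, (d, bad) in enumerate(zip(dev, flags)):
    print(f"point {i}: deviation {d*1000:.0f} mm", "OUT OF TOLERANCE" if bad else "ok")
```

In a real system the comparison would run over dense point clouds rather than a handful of control points, and the flagged regions would drive the red/green overlays described in the scenario earlier.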
These are some of the many intelligent services BIM systems can offer once they achieve the context awareness that augmented reality provides. Academic and industrial research communities have conducted several studies in this area and have developed augmented-reality-supported solutions for multiple problems in construction (Meža et al. 2015; Wang et al. 2013; Behzadan et al. 2015).
That said, the technology has not yet matured to the stage where it is ready to offer these services. Researchers are still tying up many loose ends. Some of the open problems are:
Automated markerless BIM-to-real-world alignment: Automated markerless registration of BIM models to physical reality is essential for the widespread use of augmented reality. At present, however, it is a resource-intensive task, and we have not yet achieved it in a fully automated manner; it can only be done semi-automatically or with marker-based methods.
Enormous size of BIM models: BIM objects are known for their large size due to the embedded information, which makes it difficult to render them in a virtual 3D environment with all of that information intact. Researchers are trying to address this by discretizing the model and decentralizing the information, but this work is still at the development stage.
BIM integration: Most current AR applications do not offer full BIM integration. They show the 3D overlay with a few specific, hard-coded tasks, forcing the user to switch between applications to access everything. Not all the information in the BIM model is available during AR visualization.
Dealing with high-glare conditions: Current AR systems have problems working under high-intensity light, which makes 3D mesh creation difficult. In addition, the optimal range of the depth cameras is less than 10 m, so registration is limited to the small volume of real space inside the depth cameras' viewing frustum.
Health and safety issues: There is an ongoing debate on how to integrate AR displays safely into a construction worker's safety gear without distracting them from performing their tasks safely and efficiently.
These are some of the challenges we must overcome before the technology is adopted by the industry, but they do not diminish its possibilities. The intelligence that can be embedded in this technology, and the levels of automation we can achieve with it, are limited only by our imagination and creativity. With mainstream mobile manufacturers entering the AR space, most of the limitations we face today (especially hardware limitations) will start to disappear in the coming years. That is why we, as a construction IT community, should prepare ourselves for this change: be ready to embrace these technologies, take ownership of them, and embed them in our roles for a productive and efficient construction practice.
Behzadan, A.H., Dong, S. & Kamat, V.R., 2015. Augmented reality visualization: A review of civil infrastructure system applications. Advanced Engineering Informatics, 29(2), pp.252–267.
Meža, S., Turk, Ž. & Dolenc, M., 2015. Measuring the potential of augmented reality in civil engineering. Advances in Engineering Software, 90.
Wang, X. et al., 2013. Augmented Reality in built environment: Classification and implications for future research. Automation in Construction, 32, pp.1–13. Available at: http://www.sciencedirect.com/science/article/pii/S0926580512002166.
Ranjith K. Soman is a PhD candidate at Imperial College London's Centre for Systems Engineering and Innovation. He is an active member of the India BIM Association. His research focuses on developing an integrated framework for interactive visualization and automated data acquisition for efficient construction progress monitoring. The research studies the potential of interactive visualization using augmented reality to aid the construction progress-monitoring practices followed in the industry, and is carried out in collaboration with Bentley Systems. Ranjith has a multidisciplinary background with strong experience in civil engineering, construction management, metro rail construction, automation and robotics in construction, virtual/augmented reality, building information modelling, wireless sensor networks, and automated construction progress monitoring.
Prior to joining Imperial College, Ranjith obtained his B.Tech (Hons) in Civil Engineering from the University of Calicut and his MS (by Research) in Building Technology and Construction Management from the Indian Institute of Technology Madras. His areas of interest include automation and robotics in construction, IT applications in construction management, building information modelling, virtual/augmented reality in construction, system identification, information theory, and sensor-assisted simulations. Feel free to contact him through his email: email@example.com