Written by Venkat Venkatasubramanian, SwiftMotion CEO
One of the pressing needs for industrial complexes, large and small, is maintaining proper ergonomics among the workforce to prevent injuries. Such injuries cost employers heavily in workers' compensation and in indirect costs from employee absence. For job and task analysis, the traditional approach of observing posture by hand and calculating risk scores such as RULA and REBA is falling out of favor. Emerging technologies now let practitioners analyze high-risk jobs and investigate what could have caused such injuries.
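To make the scoring idea concrete, here is a minimal sketch of how one posture component of a RULA-style assessment can be automated once a joint angle is available. The angle bands below follow the common RULA upper-arm breakdown, but treat them as illustrative assumptions rather than the official worksheet:

```python
# Illustrative RULA-style upper-arm scoring from a measured shoulder
# flexion angle. Bands are an assumption for demonstration purposes.

def upper_arm_score(flexion_deg: float) -> int:
    """Map shoulder flexion (degrees) to a posture risk score band."""
    if flexion_deg < 20:
        return 1   # near-neutral posture
    elif flexion_deg < 45:
        return 2   # mild flexion
    elif flexion_deg < 90:
        return 3   # moderate flexion
    else:
        return 4   # severe flexion

print(upper_arm_score(30))   # mild flexion band
print(upper_arm_score(100))  # severe flexion band
```

In a full assessment this score would be combined with scores for other body segments, load, and muscle use, but the pattern is the same: measured angles in, risk bands out.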
Vision-based systems use commercially available cameras or smartphone cameras together with deep learning models that understand the pose and activities a subject is performing. The proliferation of internet-enabled cameras has made safety risk assessments much more manageable. Vision-based systems are easy to deploy and highly scalable. They are best suited for “bench work,” where subjects do not move far from the primary camera capturing the activity. This approach also limits the analysis when a full 3D motion analysis is needed. Nevertheless, a single camera can be sufficient for many task analyses.
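Once a pose-estimation model has produced 2D keypoints, joint angles fall out of simple vector geometry. The sketch below assumes hypothetical pixel coordinates for shoulder, elbow, and wrist keypoints; any 2D pose model's output could be plugged in the same way:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by keypoints a-b-c.

    Each point is an (x, y) pixel coordinate, e.g. from a 2D
    pose-estimation model's output.
    """
    v1 = (a[0] - b[0], a[1] - b[1])  # vector from joint to first keypoint
    v2 = (c[0] - b[0], c[1] - b[1])  # vector from joint to second keypoint
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp rounding error
    return math.degrees(math.acos(cos_t))

# Elbow angle from hypothetical shoulder, elbow, wrist keypoints:
print(round(joint_angle((0, 0), (0, 100), (100, 100))))  # 90
```

Note that a 2D angle like this is only accurate when the limb lies roughly in the camera plane, which is exactly the camera-position limitation discussed above.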
Another method currently in practice is the use of motion tracking sensors for assessing occupational health risks. Sensors located at each joint provide accurate 3D rotations, which are mapped to a full-body animation. Motion tracking sensors are becoming more prevalent as the cost of sensing modules drops and as those modules come online via BLE, Wi-Fi, and other connectivity options. Different risk models can be applied to the time series of data observed while tasks are performed. Solutions built on these sensors allow high mobility and are well suited to construction sites, where many structures form occlusions that prevent cameras from working effectively.
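Because the sensors stream joint rotations as a time series, kinematic quantities such as angular velocity and acceleration can be derived directly. A minimal sketch, assuming a uniformly sampled joint-angle series and a made-up 100 Hz sample rate, using central finite differences:

```python
# Deriving angular velocity and acceleration from a sampled joint-angle
# time series. Sample rate and data values are assumed for illustration.

def finite_diff(series, dt):
    """Central-difference derivative of a uniformly sampled series."""
    return [(series[i + 1] - series[i - 1]) / (2 * dt)
            for i in range(1, len(series) - 1)]

dt = 0.01  # 100 Hz sample period (assumed)
angles = [0.0, 0.5, 2.0, 4.5, 8.0]   # joint angle over time, degrees
velocity = finite_diff(angles, dt)    # deg/s
accel = finite_diff(velocity, dt)     # deg/s^2
print(velocity)  # [100.0, 200.0, 300.0]
```

These derived kinematics are what give sensor-based systems their edge in the comparison below: a single camera sees only the angles of visible joints, while the sensor stream supports full velocity and acceleration profiles.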
A comparison of the two technologies is summarized here:

| Camera-based risk assessments | Sensor-based risk assessments |
|---|---|
| Suited for low-mobility scenarios | Suited for all mobility scenarios |
| Works well in occlusion-free environments | Occlusions are not a factor |
| Cheaper: uses IP cameras or smartphones | Relatively expensive, with upfront hardware costs |
| Limited to 2D capture, or dependent on camera position relative to the subject | Enables full 3D capture of the subject |
| Measures joint angles of visible joints only | Measures all joint angles, plus kinematics such as velocities and accelerations |
We see advantages in both systems and have already developed solutions utilizing both. Both provide a giant leap over existing observational methods that rely on pen, paper, and manual risk calculations. Our FUZE system is a sensor-based solution applicable in construction sites and warehouses for analyzing tasks and redesigning jobs. Our VIZE platform is ideal for safety risk assessments, applying analytics and deep learning models to camera vision.