Turbulent flow height measurement with stereo vision

*Corresponding author: ajdinjasarevic1995@gmail.com

© The Author 2021. Published by ARDA.

Abstract

This paper describes the 3D analysis of two colliding water currents using the method of photogrammetry. Photogrammetry is used in fields such as architecture, engineering, police investigation, cultural heritage preservation, the military and geology; in military applications, for example, it can be used to reconstruct a site with traces of shrapnel or various projectiles. In our case we tried to measure the height of a turbulent flow formed where two currents collided at an angle of 90°. The first section introduces the problem and the method. The second section describes the method of photogrammetry and the basics of torrential flows. The third section describes our experiment, the fourth the process of obtaining the 3D model, the fifth the analysis of the results, and in the sixth section a conclusion is given.


Introduction
With the existence of several types of measurement technologies and their increasing availability, approaches to measuring various phenomena in the living environment are also changing. Knowing the situation at a confluence, both on watercourses (riverbeds) and on many facilities and water infrastructure (fish passes), is of great importance for users and planners of works in this environment. At the confluence of two free-surface currents with higher Reynolds and Froude numbers, quasi-stationary standing waves appear, the flow dynamics of which are very pronounced. In the past, these properties have been measured with tactile or pressure sensors, which have limited temporal and spatial resolution. In this paper, we determined the height of such a stream with the help of photogrammetry. At a measuring station with two perpendicular currents, we recorded a quasi-standing wave with two high-speed cameras, and after processing the images we made a 3D model of the wave and determined its height. We found that the results were not accurate enough to determine the wave height or to perform a more detailed analysis of the topographic structure, and that more accurate results would require a larger number of cameras contributing images from several different angles.

Torrential flow and photogrammetry
In this section we describe the basics of torrential flow and photogrammetry.

Torrential flow
In free-surface flow, different flow regimes occur (subcritical, critical and supercritical). These regimes can transition from one to the other over shorter sections. At the available cross-sectional energy and at a given flow Q, either the critical depth or one of the two conjugate depths occurs (at subcritical or supercritical currents). At low velocities and great depths, potential energy predominates; such a flow is called a steady or subcritical flow. At high velocities and shallow depths, kinetic energy predominates; such a flow is called a supercritical or torrential flow. It is characteristic of a subcritical current that the water moves more slowly and the flow conditions are more affected by the downstream conditions, while in the case of a supercritical flow, the conditions are more affected by the upstream conditions. At the transition from supercritical to subcritical flow and vice versa, a strongly turbulent flow occurs, which we call a water (hydraulic) jump. There are marked losses of kinetic energy in the water jump. The first equation describes the depth for subcritical flow and the second for supercritical flow [1]:

h_1 = E - v_1^2 / (2g)    (1)

h_2 = E - v_2^2 / (2g)    (2)

The following equation states the condition for the occurrence of critical quantities of water flow, by means of which the critical depth can be calculated [1]:

Q^2 B_c / (g A_c^3) = 1    (3)

In the equations, h_1 represents the depth at subcritical flow, h_2 is the depth at supercritical flow, v_1 stands for the velocity at subcritical flow, v_2 represents the velocity at supercritical flow, E is the available specific energy of the cross-section, A_c is the flow area at critical flow and B_c stands for the surface width at critical flow. The free-surface flow regime is described by the dimensionless Froude number (Fr). The Froude number describes the ratio of inertial and gravitational forces, and is used to describe and model the free-surface flow caused by the component of the force of gravity in the downstream direction.
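For a rectangular channel, where the flow area is A = b·h and the surface width is B = b, the critical-flow condition reduces to an explicit formula for the critical depth, h_c = (Q²/(g·b²))^(1/3). A minimal sketch of this calculation (the flow and width values are illustrative, not measured values from the experiment):

```python
# Critical depth h_c for a rectangular channel, derived from the
# critical-flow condition Q^2 * B / (g * A^3) = 1 with A = b * h
# and B = b, which gives h_c = (Q^2 / (g * b^2))**(1/3).
G = 9.81  # gravitational acceleration [m/s^2]

def critical_depth(Q: float, b: float) -> float:
    """Critical depth [m] for flow Q [m^3/s] in a channel of width b [m]."""
    return (Q ** 2 / (G * b ** 2)) ** (1.0 / 3.0)

# Illustrative values only: 10 l/s in a 0.5 m wide channel.
h_c = critical_depth(Q=0.01, b=0.5)
print(round(h_c, 4))
```

At this depth the Froude number equals exactly 1, which is a convenient sanity check on the formula.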
We usually neglect the influence of viscous, surface and capillary stresses if the models are not miniature, but if we take into account geometric similarities and similarities of boundary conditions, we can also achieve similarity of friction forces [2], [3].
The Froude number is defined as:

Fr = v / √(gL)    (4)

In equation (4), √(gL) is the travel velocity of the gravitational wave, L is the characteristic flow length and v is the water flow velocity. The larger the Froude number, the smaller the influence of gravitational forces and the greater the influence of flow kinetics; conversely, the smaller the Fr, the greater the influence of gravitational forces [3]. The Froude number gives information about the velocity regime of the flow, so it can be used as a measure of the water flow regime [2]. According to the value of Fr, the flow is divided into:
-Fr < 1 (subcritical flow),
-Fr > 1 (supercritical flow).
In this research we deal mainly with 3D analysis of supercritical flow.
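The regime classification above can be written as a small helper; a minimal sketch (the function names and test values are ours, not from the paper):

```python
import math

G = 9.81  # gravitational acceleration [m/s^2]

def froude_number(v: float, L: float) -> float:
    """Fr = v / sqrt(g * L): flow velocity v [m/s] over the travel
    velocity of the gravitational wave for characteristic length L [m]."""
    return v / math.sqrt(G * L)

def regime(fr: float) -> str:
    """Classify the free-surface flow regime by the Froude number."""
    if fr < 1.0:
        return "subcritical"
    if fr > 1.0:
        return "supercritical"
    return "critical"

# Fast, shallow flow (illustrative values): Fr > 1, i.e. supercritical.
print(regime(froude_number(v=2.0, L=0.05)))
```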

Photogrammetry
Photogrammetry is a method of obtaining reliable information about the properties of surfaces and objects without physical contact with objects and measuring and interpreting this information. In addition to processing and interpreting photographs, photogrammetry today also includes the processing of videos, images from laser scanners, X-ray images, magnetic resonance images, IR images and various devices such as sonar, radar, etc. The main purpose of photogrammetry in all the mentioned applications is to obtain 3D data from 2D photos or videos. Many analog, tactile, non-tactile, ultrasonic and optical methods and, above all, analytical techniques are used for this purpose. With the development of computing and with increasing processing power, the boundaries of application and accuracy of photogrammetry are moving towards increasingly realistic representations [4], [5].

Experiment
A measuring station that replicates the conditions at the confluence of two flows in the natural environment was built at the Faculty of Civil Engineering and Geodesy in Ljubljana years ago, and is comparable to previous research. The water comes to the station through two channels, the main and the side channel. The main channel is a total of 6 m long and 0.5 m wide. The side channel is also 0.5 m wide and 1 m long. The confluence of the two streams therefore occurs 1 m after the water enters the side channel; this length was chosen in order to stabilize the flow at the entrance and to achieve the same conditions in both channels before the confluence. The conditions at the test site, with dimensions in mm, are shown in figure 1.

High speed cameras
Two Photron high-speed cameras were used. The number of frames per second (fps), the exposure time and the moment of photo capture were all set in Photron's computer software. We also synchronized the cameras so that we could set the same parameters for both. We used the following cameras: a Photron Mini UX100 800K black-and-white and a Photron Mini UX100 800K colour. Both cameras allow the mounting of lenses according to the C and F standards and the mounting of the housing on standard three-axis photo heads with a DIN 933 ¼"-20 UNC screw. The cameras have 16 GB of internal memory, which is enough to record a few seconds, depending on the set number of frames per second, resolution, etc. The recordings were then transferred to a computer in .bmp format via an Ethernet port with a UTP cable. The synchronization and trigger ports were connected to each other with BNC cables. We had to move the cameras far enough away that we captured both the events at the confluence and a marker that we placed above the confluence, while keeping the edge of the stand out of the photo. Figure 3 shows the placement of the cameras viewed from the direction of the side stream and opposite the side stream. The original plan of the experiment was to set up five high-speed cameras, with the help of which we could record the events at the confluence from several angles and produce a more accurate 3D model. In the end, we managed to get two high-speed cameras, which we placed on a stand made for this purpose. It was important that the cameras were at rest and that vibrations at the confluence and on the testing facility did not affect the quality of the photographs. We placed the cameras above the side channel and thus directly recorded the events at the confluence from two angles.
The cameras were positioned so that the events on the water surface at the confluence of the two flows, and the marker above them, were in the center of the photo; the cameras were shifted 60 cm apart, which on an imaginary circle corresponds to an angular distance of 30° between them, as shown in figure 4. Several factors influenced the camera placement. Since the wave was high, we had to position the cameras so that we also caught the top of the wave, so that we could measure the area. In addition to the top of the wave, we also had to capture the bottom, but we had to be careful not to pull the cameras too far from the confluence, as we would lose a lot of information about the topography. The position of the edges of the testing facility also influenced the camera positions, as the edges made it impossible to capture some shots at the confluence. The angle of 30° was chosen because the Reality Capture software, in which we performed the bulk of the reconstruction of photos into a 3D model, recommends an angle between 15° and 30°, depending on how many cameras are being used. Since we used only two cameras, we chose the largest recommended angle to capture as much of the action as possible. With a smaller angle, we would not get much more information than with a single camera, while with a larger angle the program would not detect common points.
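From the chord-angle relation, the radius of the imaginary circle on which the cameras stood can be recovered; a sketch assuming the 60 cm shift is the straight-line (chord) distance between the two camera positions:

```python
import math

def circle_radius(chord: float, angle_deg: float) -> float:
    """Radius of a circle on which two points a given chord apart
    subtend the given central angle: chord = 2 * R * sin(angle / 2)."""
    return chord / (2.0 * math.sin(math.radians(angle_deg) / 2.0))

# A 60 cm shift at a 30-degree angular distance:
R = circle_radius(chord=0.60, angle_deg=30.0)
print(round(R, 2))  # distance of the cameras from the confluence [m]
```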

Lenses
We used two wide-angle zoom lenses, as we wanted to capture the widest possible viewing angle, and set both to a focal length of 18 mm. As a result, we also captured the glass edges of the measuring station, which we then cropped out in the Adobe Photoshop software environment. The following lenses were used:
-AF-S NIKKOR 18-200 mm f/3.5-5.6G ED,
-AF-S NIKKOR 18-55 mm f/3.5-5.6G II ED.

Lighting
Due to the camera settings, which required a lot of light on the sensor, we installed nine different light sources. The events were exposed to the sensor for a very short time, as we took thousands of shots in one second; adequate lighting of the recorded object is therefore crucial. In addition, we had to illuminate strongly so that we could keep the exposure time short for better sharpness of the recorded events. The light sources were positioned so that all the light beams were directed as much as possible towards the center of the confluence, as shown in figure 5. If the light were not directed evenly, one part of the testing facility or the confluence would be illuminated better than the other, which would mean that the edges of the confluence would not be correctly detected when making the 3D model. Light sources were installed above the main and side channels, above the confluence and above the drain channel, with five more halogen lamps installed either directly on the L profile above the water surface (three lamps) or on a transverse profile across the channel (two lamps). In total, four LED lamps and five halogen lamps were installed; the lamp on the diagram marked with a cross in a circle is an LED lamp mounted directly above the confluence.


Experiment scenarios
To perform the experiment, we set up eight different scenarios. The scenarios differed from each other in the flow set on both channels and in the height of the opening at the inlets. We used three different channel opening heights and eight different flow settings, all in supercritical mode. To determine the Froude number, we had to calculate the velocities and the characteristic dimension for the experiment. We show an example of calculating the Froude number in the main channel for the first scenario; for the other scenarios in the main channel and in the side channel the procedure is identical. We first had to calculate the water velocity in the channel:

v = Q / (h b)    (5)

Here Q is the water flow in the main channel in m³/s, h is the opening height and b is the width of the main channel. We then determined the characteristic length L, which in our case is the height of the channel opening, and using these data we calculated the Froude number for the first scenario in the main channel according to equation (4).
All scenarios with the calculated velocities and Froude numbers are shown in table 1.
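The calculation procedure for one scenario can be sketched as follows; the flow, opening and width values below are illustrative stand-ins, since the actual scenario settings are those listed in table 1:

```python
import math

G = 9.81  # gravitational acceleration [m/s^2]

def channel_velocity(Q: float, h: float, b: float) -> float:
    """Mean velocity [m/s] from flow Q [m^3/s], opening height h [m]
    and channel width b [m]: v = Q / (h * b)."""
    return Q / (h * b)

def froude(v: float, L: float) -> float:
    """Fr = v / sqrt(g * L); the characteristic length L is the
    height of the channel opening here."""
    return v / math.sqrt(G * L)

# Illustrative scenario: 10 l/s through a 2 cm opening of a 0.5 m channel.
v = channel_velocity(Q=0.010, h=0.02, b=0.5)
fr = froude(v, L=0.02)
print(round(v, 2), round(fr, 2))  # Fr > 1, i.e. supercritical mode
```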

Making of 3D model
In this section, we describe the process of obtaining the results of our experiment. It describes how we processed the photos, made and processed the mesh, made a 3D model of the water surface at the confluence and finally measured the flow height.
To create a 3D model, we used several different software tools, with which we finally came to the desired 3D model:
-Photron FASTCAM Viewer: We used this software to capture photos and transfer them in .bmp format to a computer.
-Adobe Photoshop: A software tool that allowed us to convert the photos to .jpg format and process them, as mentioned in chapter 4.1; it was used to change the color photos to black and white and to automate this process for all photos in the series.
-Reality Capture: The central software environment, with the help of which, as described in chapter 4.2, we created a 3D mesh in .obj format from the photographs and initially trimmed and processed it.
-MeshLab: A software environment used to reduce the number of mesh elements without much loss of topography information, as described in section 4.3. In the end, we obtained a 3D mesh in .stl format.
-Autodesk Fusion 360: Using this program, we created a 3D model in .iges format from the 3D mesh, as described in chapter 4.4.
-NX Siemens: Modelling software in which we manipulated the 3D model, measured the height as described in section 4.5, and placed the model in a 3D sketch of the measuring line for easier representation.

Figure 6 shows two photos recorded for scenario 2, one in color and the other in black and white. We decided to remove the markers from the photos and crop them as much as possible, so that the events at the confluence of the flows occupy the largest share of the photo. In order for Reality Capture to better recognize that we wanted to create a 3D model of the water surface, we additionally brightened the water, which also helped with edge detection. We did this in the Adobe Photoshop software, where we processed all the photos by lightening the flow and darkening the background as much as possible, to best separate the edges of the water surface from the background. The differences between the photographs before and after processing are shown in figure 8.
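The lighten-the-flow, darken-the-background step can be modeled as a simple contrast stretch; a toy sketch on grayscale pixel values in pure Python (the threshold and gain values are illustrative; the actual processing was done interactively in Photoshop):

```python
def stretch(pixels, threshold=128, gain=1.4, cut=0.6):
    """Brighten pixel values at or above the threshold (water) and
    darken those below it (background), clipping to the 0-255 range."""
    out = []
    for p in pixels:
        q = p * gain if p >= threshold else p * cut
        out.append(max(0, min(255, round(q))))
    return out

row = [40, 90, 130, 180, 250]  # one row of grayscale values
print(stretch(row))            # dark values pushed down, bright pushed up
```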

Mesh creation
Based on the reasons presented above, it was decided to remove the markers from the images and undertake the 3D reconstruction with photographs of the water only. We first cropped all the photos from both cameras to the same resolution, again using Adobe Photoshop, and automated this process for all the photos with the Batch feature. The photos thus processed were then imported into the Reality Capture program. Immediately after import, the program detected somewhere between 6,000 and 7,000 total points in the photos. Since we already had enough common points in the photos, we chose the Align images function in the Reality Capture software, which created a cloud of common points from the photos, as shown in figure 9. After creating the point cloud, we were able to start making a 3D mesh. We did this by selecting the Normal detail function in the Reality Capture software environment, which means that we used medium-precision 3D mesh fabrication. We did not choose higher accuracy because this would greatly slow down the reconstruction process and make it impossible to produce a 3D model due to the large number of elements. We thus obtained the first coarse 3D mesh. Using the Texture function in the same software, we also added a texture to the mesh to make it easier to process. The coarse mesh is shown in figure 10. We had to process the coarse mesh by trimming everything that did not represent water, such as the walls, the light reflections from the wall, and various very small droplets that were not part of the reconstruction. We first trimmed the model so that only the water at the confluence remained, then with the help of the Close holes function in the Reality Capture software we closed all the holes in the model that would hinder the production of the 3D model. The Smoothing tool function, which is also part of the Reality Capture software, smoothed the surface a bit, thus further reducing the number of elements. Finally, we checked whether there were any remaining errors in the model, using the Check topology feature in the same software environment. In the end, we obtained a trimmed mesh.

Processing the mesh
The model was then exported in .obj format. From the .obj format, we wanted to convert the reconstructed object into a 3D model which can be imported into a modeler and manipulated (adding other elements, measuring the flow height, determining the topographic properties…). In order to do this, we had to reduce the number of elements, in such a way that the key topographic properties of the model were not lost while still getting below 10,000 elements. The mesh was processed using the MeshLab program and its Quadric Edge Collapse Decimation function. The original mesh had 18,189 elements. Since the program for converting a mesh to a 3D model (Autodesk Fusion 360) is limited to a maximum of 10,000 elements, we decided to reduce the number of elements in two passes, in marked places where the reduction would not lose key topography information. The Preserve topology option of the MeshLab software, which preserves topographic properties despite the reduction in the number of elements, also helped us to preserve the topography of the model. It is important to reduce the number of elements in such a way that not too much information is lost, as shown in figure 11.

Figure 11. Mesh with fewer elements (left) and mesh with more elements (right) [6]

In the end, we created a mesh with 9,094 elements and saved it in .stl format, which allows the creation of a 3D model but does not preserve the textural properties of the model contained in an .obj file. Textures were no longer needed at this stage, as we only needed them in the initial process to be able to separate the water at the confluence from the other elements. Figure 12 shows the difference between the mesh before processing in MeshLab and the mesh after processing. We notice that despite the two element reductions the differences in topography are small, and we can also see that the mesh has no texture after processing.

Figure 12. Mesh before element reduction (left) and after element and texture reduction (right)
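The element-count constraint can be verified programmatically on an exported ASCII .stl file; a minimal sketch (the 10,000-facet limit is the Fusion 360 import limit mentioned above; the two-triangle STL string is a toy example):

```python
def count_facets(stl_text: str) -> int:
    """Count triangles in an ASCII STL by its 'facet normal' records."""
    return sum(1 for line in stl_text.splitlines()
               if line.strip().startswith("facet normal"))

def within_limit(stl_text: str, limit: int = 10_000) -> bool:
    """Check whether the mesh fits under the converter's element limit."""
    return count_facets(stl_text) <= limit

# Tiny two-triangle ASCII STL example:
stl = """solid demo
facet normal 0 0 1
 outer loop
  vertex 0 0 0
  vertex 1 0 0
  vertex 0 1 0
 endloop
endfacet
facet normal 0 0 1
 outer loop
  vertex 1 0 0
  vertex 1 1 0
  vertex 0 1 0
 endloop
endfacet
endsolid demo"""
print(count_facets(stl), within_limit(stl))
```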

Creation of 3D model
We started creating a 3D model in the Autodesk Fusion 360 software. After importing the mesh into the software, we chose the Mesh to BRep function, which creates a 3D model from the mesh. Once the program created the 3D model, we exported it in .iges format, as shown in figure 13. We can see that the model has almost no depth and is also very roughly trimmed, as shown in figure 15. It has no depth because we captured with only two high-speed cameras, so we covered a very small angle of the action. Since it is very difficult to distinguish the boundaries of the confluence from the surroundings due to the transparency of the water, we cut off a large part of the model which apparently belonged to the confluence. This makes it difficult to determine where the bottom of the model is, i.e. the starting point for measuring the height.


Turbulent flow height measurement
After importing the 3D model into the modeler, we were able to measure the wave height. The 3D model would first have to be scaled, as we only obtained the shape, not the real dimensions. In order to scale it, we would need a reference height or some other reference length visible in the photos of at least one scenario, but we did not perform this measurement, so the heights are given only at an arbitrary scale; for further measurements we would need an accurate reference length with which to scale our 3D model. An example of height measurement in the Siemens NX software environment is shown in figure 16.
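Had a reference length been measured, the scaling would be a single ratio applied to every model dimension; a sketch with hypothetical values (`ref_real` and `ref_model` are illustrative names and numbers, not measurements from the experiment):

```python
def scale_factor(ref_real: float, ref_model: float) -> float:
    """Ratio converting model units to real-world units, given the same
    reference length measured in reality and on the 3D model."""
    return ref_real / ref_model

def to_real(length_model: float, factor: float) -> float:
    """Convert any model-space length to real-world units."""
    return length_model * factor

# Hypothetical: a 5 cm reference appears as 2.0 model units,
# and the measured wave height is 3.2 model units.
f = scale_factor(ref_real=0.05, ref_model=2.0)
print(to_real(3.2, f))  # wave height in metres
```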

Discussion
At first glance, we can see that neither the shape nor the height of the flow corresponds to the real situation, so we can conclude that the reconstruction failed. This can already be seen in figure 17, which shows a photo taken with a high-speed camera next to the 3D model we made. The very rough shape is reconstructed correctly (in 2D), but when recording water there are many factors that affect the quality of the shot. The first thing we notice is that the edges and shape are not precise. This happened because the software could not accurately separate the water from the environment around it, since we directed so much light into one part of the confluence (the part of the water where the dynamics were greatest) that the other part remained virtually unlit. Thus, one part of the confluence is over-illuminated, where the topographic properties of the current were lost, and the other part is poorly visible. To capture even the coarsest original shape, the light should be directed as evenly as possible across the entire water flow. Another problem associated with lighting is that light reflects off water differently at different angles. In other words, one droplet or any other part of the water can look different in the photo from one camera than in the photo from the other, which made creating the 3D model difficult because the program did not find some common droplets, as it did not detect them as identical in both photos due to the different lighting. The solution would be a light source that is directed and scattered as evenly as possible across the water surface. We also found that it is possible to increase the brightness of the water quite effectively with software processing, so that such strong lighting is not strictly required, as it can be replaced by software photo processing.
The third problem is that we did not capture the same frame with both cameras. This was made impossible by the configuration of the testing facility itself, as we were limited by the glass edge of the channel, which prevented us from capturing frames located behind the channel wall. We were able to capture such a shot with one camera, but not with the other. An example of a photograph from both cameras is shown in figure 18, where we notice that a large part of the frame on camera 2 is missing. This could be corrected by changing the flows so that the action would take place in the middle of the channel, where the footage could be captured by both cameras.

Figure 18. Frame shot on camera 1 (left) and camera 2 (right)

The next problem was the marker. The marker is used so that the program can determine from where the photos were taken, depending on how far the marker is shifted to one side or the other. With the help of the marker, it then reconstructs the object located around the marker. Because we placed the marker outside the center of events, the program practically made a 3D model of the marker only, and did not detect the water at all. For better results, and in general when using a marker, it would be sensible to place the marker in the center of events, preferably directly in the watercourse of the confluence, so that the marker is visible in all photographs, as seen in figure 19 on the right, where the 3D reconstruction of cavitation structures was discussed.
We can see that in this case the marker is placed so that it is located in the center of the event, i.e. of the movement of the ultrasound probe. It would also make sense to place a few objects in the confluence that would be visible to all cameras (e.g. a small marble attached in the confluence with a string, which does not affect flow conditions), or differently colored segments of the water or of the testing facility visible to all cameras; it would also help to install more markers, but it is crucial that they are all visible in all photos.

Figure 19. Placing a marker outside the center of events in our experiment (left) and placing a marker in the center of events in the experiment (right) [4]

The biggest problem, however, is the number of cameras. Since we used two cameras, we met only the minimum requirement for the number of cameras at which it is theoretically possible to make a 3D model. The original purpose was to shoot with high-speed cameras so that we could create several 3D models within a short time (less than half a second), and then follow the time variation of the water level through those models; we would need more high-speed cameras for this purpose. To make our model, however, we would only need as many cameras as possible, not necessarily high-speed ones, placed in as many positions around the confluence as possible, thus capturing the events from several angles. It would be important for the sensors to be as similar as possible, so it would be ideal to shoot with identical cameras that have the same exposure settings and are synchronized. For the purpose of capturing the topography at a given moment, fast cameras are not necessary; slower cameras, which are also cheaper, can be used. It would also be helpful if all the cameras produced color photos, as this would give us more information.
With two cameras spanning a 30° angle of view, representing only 1/12 of the full 360° circle, we obtained very little data on the depth and topographic structure of the water surface. It is clear from the 3D model that it lacks the depth that we would gain with a larger number of cameras or a larger viewing angle. Given that the recommended angular spacing of the cameras around the center of the event is 15-30°, it would be optimal to install between 12 and 24 cameras, depending on the chosen spacing. It would also be important to install at least one, and preferably five, high-speed cameras above the confluence, with which the topography would also be recorded from above. The cameras above the confluence are not essential for determining the flow height, but they are important for making the 3D model as accurate as possible.
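The camera-count estimate follows directly from the recommended angular spacing; a minimal sketch:

```python
import math

def cameras_for_full_circle(spacing_deg: float) -> int:
    """Number of cameras needed to cover 360 degrees at a given
    angular spacing between neighbouring cameras."""
    return math.ceil(360.0 / spacing_deg)

# Recommended spacing of 15-30 degrees gives 12 to 24 cameras:
print(cameras_for_full_circle(30.0), cameras_for_full_circle(15.0))
```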
It would also make sense to capture photographs from different distances from the confluence, i.e. closer for a more accurate capture of topographic details, or farther away for better edge detection of the 3D model.

Conclusions
In this article, a method of measuring the height of a turbulent flow at the confluence of two flows with photogrammetry was considered. At the test site, the confluence was filmed with two high-speed cameras placed above the side-flow channel. The photographs were then processed accordingly and, with the help of the Reality Capture software, a 3D reconstruction of the confluence was made and the wave height was measured. The following was found:
-It is possible to make a 3D model of the confluence with this method.
-The method is not accurate if only two high-speed cameras are used; the key to the accuracy of the method is to use as many cameras as possible, covering as many angles around the confluence as possible.
-The marker should be placed in the center of events for greater accuracy of the 3D modeling.
-In order to measure the wave height at the confluence, it is necessary to have a reference length in the photograph, with the help of which the exact wave height can subsequently be determined.
-We proposed corrections for future performances of the experiment.