
Classified 3D mapping and deep learning-aided signal power estimation architecture for the deployment of wireless communication systems

Abstract

Traditional wireless communication system deployment models require expensive and time-consuming procedures, including environment selection (rural, urban, and suburban), drive-test data collection, and analysis of the raw data. These procedures mainly rely on stochastic and deterministic approaches to signal strength prediction for locating the optimum cellular tower (eNodeB) position in 4G and 5G systems. Since environment selection is limited to urban, suburban, and rural categories, these models do not cover complex macro and micro variations, especially buildings and tree canopies, which have a higher impact on signal fading due to scattering and absorption. Therefore, they usually yield high prediction errors. This article proposes an efficient architecture for the deployment of communication systems. The proposed method determines the effect of the environment by extracting tree and building properties using a classified 3D map and You Only Look Once (YOLO) V5, one of the most efficient deep learning algorithms. According to the results, the mean average precision (mAP) accuracies at the 0.5 and 0.95 thresholds are obtained as 0.96 and 0.45, and image color classification (ICC) findings indicate 77.6% accuracy on vegetation detection, especially for tree canopies. Thus, the obtained results significantly improve signal strength prediction, with a 3.96% Mean Absolute Percentage Error (MAPE), while other empirical models’ prediction errors fall in the range of 6.07–15.26%.

Introduction

Recent advancements in the digital era and intelligent devices have led to a breakthrough in wireless technology. 4G long-term evolution (LTE) and the upcoming 5G cellular technology are the backbones of this digital transformation, offering consistent and high-throughput network communication [1, 2]. However, maintaining connectivity mainly depends on the accuracy of the empirical signal path loss model, which must cover the detailed environment structure, such as trees, bushes, branches, leaves, and human-made constructions [3]. Based on empirical signal models, RF engineers design, position, and deploy cellular towers (eNodeBs) over the target field. This process always requires a robust sensor configuration to acquire detailed information about the environment [4]. Light detection and ranging (LiDAR) has been widely used in many engineering applications, including civil engineering, military surveillance, natural resource characterization, and artificial intelligence (AI)-aided autonomous devices [5, 6]. However, a LiDAR-only system is not capable of georeferencing and mapping the terrain due to the lack of geographical and positioning information. Therefore, engineers use LiDAR, an unmanned aerial vehicle (UAV), the Global Positioning System (GPS), and an inertial measurement unit (IMU) as the primary devices and sensors to obtain a 3D image of the environment [7, 8]. The raw LiDAR data are direct-georeferenced by the fusion of the GPS and IMU sensors. Every received data point containing a GPS location (latitude and longitude) should be time-stamped with pitch, yaw, and roll information through the IMU [9]. Nevertheless, 3D mapping alone is not enough to detect and classify obstacles. Computer vision techniques such as color segmentation and deep learning algorithms are also required to extract meaningful features from the environment.

Image color classification (ICC) is a computer vision technique applied to 2D images to extract desired areas from the target location. RGB is one of the most widely used image formats, containing composite channels of red, green, and blue coded in 256 levels (0–255); combining these three channels can create the various colors that exist in nature. Since the RGB color space is not suitable for digital manipulation to extract trees and other objects from the environment, the Lab color space, in which L stands for lightness and a and b stand for the color dimensions, can be used [10]. The main advantage of using Lab color is access to all colors in the spectrum, including some colors beyond human perception. These features are critical for object detection and deep learning algorithms.

Deep learning is essentially a subset of machine learning in which an artificial neural network (ANN) imitates human neural cells. The learning is based on training with a considerable amount of data, which is used to reduce the error iteratively [11,12,13]. The term “deep” refers to the high number of ANN layers: as the number of layers increases, the learning becomes deeper. There are different types of ANN, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and region-based convolutional neural networks (R-CNNs). Object detection systems such as You Only Look Once (YOLO) V5 and EfficientNet also systematically use these neural nets to classify and locate objects. However, all these systems have their pros and cons in terms of performance. YOLO V5 utilizes CNNs to detect and classify objects for various applications in real time (higher frames per second), making it preferable [14, 15].

This research demonstrates an AI-aided multi-sensor fusion architecture to accurately predict wireless communication systems’ signal power path loss (SPPL) by including the micro-variation effects of the environment. Unlike other empirical models, our model minimizes environmental constraints by extracting features, including the height and type of obstacles in the line-of-sight (LOS) direction, from the classified 3D terrain. The proposed method aggregates the effect of trees and buildings on SPPL using the ICC and YOLO V5 algorithms. According to the results, there is a significant improvement in predicting SPPL for the deployment of wireless communication systems. This new method not only minimizes the error related to environmental complexity but also maintains connectivity for any type of environment.

The rest of the paper is organized as follows: Sect. 2 reviews the background work; Sect. 3 describes the materials and methods; Sect. 4 presents the analysis and results; Sect. 5 discusses the findings; Sect. 6 concludes the paper.

Background work

The continuous growth of cellular communication systems and wireless sensor networks (WSNs), in parallel with increasing demand, poses new energy consumption and efficiency challenges. A Federal Communications Commission (FCC) report states that the inadequacy of infrastructure limits the ability to manage services in response to the increasing demand for wireless communications [16, 17]. Adding new cellular towers may seem like an easy solution, but real-estate fees and equipment costs put companies in a tight spot. Therefore, efficient cellular tower deployment is of paramount importance.

To achieve optimum SPPL, several performance criteria can be used, such as sufficient received signal level (RSL), comprehensive coverage, and long distance [18]. The SPPL is usually determined according to the free space, Cost-231 Hata, and log-normal modeling results to increase efficiency. Some researchers also combine LiDAR and path loss models to achieve maximum accuracy. For instance, Demetri et al. estimate the radio RSSI readings and behavior of a low-power WSN in a forest environment using airborne LiDAR equipment. Their study shows that LiDAR eliminates the necessity of in-field measurement campaigns. The proposed signal estimation method reduces the RSSI reading error to ±6 dBm on a per-link basis [19]. Image processing plays a significant role in many research areas such as medical science, defense and surveillance, construction engineering, and path loss analysis [20,21,22,23]. In their study, Thrane et al. aim to find the effect of buildings on multi-path signal propagation path loss [24]. They collect signal attenuation measurements between the transmitter and receiver at different locations. The buildings’ path loss effects are estimated using image classification and deep learning techniques, together with 2D satellite images and rotated versions of those images. Their model achieves a 1 dB to 4.7 dB improvement in path loss estimation compared to the empirical models.

Similarly, Klautau et al. utilize LiDAR-based feature extraction and a CNN architecture to reduce overhead in mm-Wave beam systems in the line-of-sight (LOS) direction. They use traffic, raytracing, and LiDAR simulators to simulate an orthogonal frequency division multiplexing (OFDM) mm-Wave downlink channel. The study contains both binary and top-M classifications, and the minimum misclassification error achieved is 24% in the noise-free condition [25]. In another study, Krijestorac et al. utilize deep learning and 3D maps to predict radio signal propagation in an urban environment, and their model outperforms the traditional methods for signal strength estimation [26]. Man-made structures and tree canopies are the two factors that reduce signal levels the most due to scattering and absorption effects. To decrease these effects, aerial base station research and analysis are being done. When ground-based cellular towers are damaged, these aerial base stations might be practical after natural disasters such as earthquakes, floods, tsunamis, and hurricanes [27]. Andreyev and Thiel claim that aggregated system capacity can be increased by 52% using aerial base stations instead of traditional ground-based cellular towers [28]. According to Alzenad et al., aerial base stations maximize the number of covered users using the minimum transmit power [29]. Pliatsios et al. also studied the optimal deployment of drone-based stations by comparing swarm intelligence approaches [30]. Although studies on aerial base stations are increasing, these stations are unstable due to a high number of uncertainties, such as random weather conditions and unpredictable motion in the air [31]. Thus, an advanced AI-aided SPPL system including direct georeferencing and sensor fusion is required to obtain sustainable and reliable results.

Materials and methods

Direct georeferencing

Direct georeferencing is the method of finding the location and orientation of the Geodetics Mobile Mapping System (Geo-MMS) with the help of external orientation elements such as elevation, orientation angles (\(\theta ,\eta\)), and distances in the Cartesian coordinate system (X, Y, and Z) [32, 33]. Geo-MMS systems involving GPS, IMU, and LiDAR are generally mounted on unmanned aerial vehicles (UAVs) for photogrammetry and real-time 3D mapping. The GPS and IMU provide the orientation (pitch, yaw, and roll) and position (latitude and longitude) of the UAV on the earth, while the downward-directed LiDAR scans the surface with laser pulses. These processes require very precise calibration since all the sensors work independently [34], as seen in Fig. 1.

Fig. 1

Direct georeferencing with LiDAR and UAV, where \((X_b, X_L)\), \((Y_b, Y_L)\), and \((Z_b, Z_L)\) are the boresight and laser Cartesian coordinates that define the rotation parameters pitch, yaw, and roll. The angles \(\theta\) and \(\eta\) represent the angle between ground and target and the angle between ground and the laser’s X direction, respectively

In this type of multi-sensor fusion architecture, the orientation is maintained by a Kalman-filter-supported inertial navigation system (INS) [35]. The INS assigns georeference points to each data block received from the IMU and LiDAR through the GPS. A sequential adjustment between the INS and LiDAR is required because each sensor operates at a different frequency. After synchronization and direct georeferencing of the IMU and LiDAR are completed, the georeferenced data points are combined to visualize the 3D point cloud. In this study, the raw point cloud data, received at a 70 kHz scan rate with 1 cm resolution, were obtained from Florida International University (FIU) [36]. The survey area and corresponding 3D point cloud from the Florida Tech neighborhood in Melbourne, Florida, are represented in Fig. 2.
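The direct-georeferencing step described above can be sketched numerically: each LiDAR return is rotated by the IMU attitude and translated by the GPS position. This is a minimal illustration assuming a Z-Y-X (yaw-pitch-roll) Euler convention and a local Cartesian frame; it is not the Geo-MMS implementation, and the sample point and position values are hypothetical.

```python
import numpy as np

def rotation_matrix(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """Body-to-world rotation from IMU angles (radians), Z-Y-X convention."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def georeference(point_body: np.ndarray, attitude, position: np.ndarray) -> np.ndarray:
    """Map a LiDAR return from the UAV body frame to world coordinates."""
    roll, pitch, yaw = attitude
    return rotation_matrix(roll, pitch, yaw) @ point_body + position

# A return 10 m below the UAV, with level attitude, lands 10 m under the platform.
p = georeference(np.array([0.0, 0.0, -10.0]), (0.0, 0.0, 0.0),
                 np.array([100.0, 200.0, 50.0]))
```

In the real system each return would also carry a GPS timestamp so the INS can interpolate the attitude at the exact moment of the laser pulse.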

Fig. 2

3D point data cloud from the Florida Tech neighborhood in Melbourne: a survey area from Google Maps, b 3D point cloud of the survey area

Classified 3D mapping is the fusion of 2D image color classification (ICC) and a 3D point cloud. The process begins with 2D ICC of the survey area, which extracts features to accumulate the valuable and necessary parts of the data. One way to apply ICC is to use the Lab color space to identify vegetation in the environment. Although most images are in RGB format, they are converted to Lab images since the RGB format is unsuitable for digital manipulation [37,38,39]. Next, a binary mask is determined to find the average intensity of each channel falling into the mask for that image. For the Lab color format, the differences between each channel (\(\Delta L,\Delta a\), \(\Delta b\)) and the corresponding mask means (\(\mu _{maskL}, \mu _{maska},\) and \(\mu _{maskb}\)) are calculated as indicated in Eq. (1).

$$\begin{aligned} \Delta L_{{M \times N}} =\, & L - \mu _{{{\text{maskL}}}} ,\quad \Delta a_{{M \times N}} \\ =\, & a - \mu _{{{\text{maska}}}} ,\quad \Delta b_{{M \times N}} \\ =\, & b - \mu _{{{\text{maskb}}}} \\ \end{aligned}$$
(1)

where L stands for lightness, a for the color dimension between red and green, and b for the color dimension between blue and yellow. The mask mean \(\mu\) for each channel is computed as in Eq. (2).

$$\mu = \left( {\frac{1}{{m \times n}}\sum\limits_{{(m,n)}} {P_{{{\text{mask}}}} (m,n)} } \right) \times {\text{Ones}}_{{m \times n}}$$
(2)

Masks alone do not fully represent the desired area. Thus, the Euclidean distance over all three channels yields the color values closest to the masked portion of the image, as indicated in Eq. (3) [40].

$$\begin{aligned} \Delta E_{M\times N}=\sqrt{(\Delta L)^{2}+(\Delta a)^{2}+(\Delta b)^{2}} \end{aligned}$$
(3)

The \(\Delta E\) must be within the 95% confidence interval (CI), since color classification without some tolerance would remove entire color tones belonging to that specific area. If a \(\Delta E\) value is smaller than the CI bound, it is assigned logic 1; otherwise, logic 0. The 3D map of the environment is then classified by fusing the 3D point cloud with the 2D classified image. The obtained results are presented in Fig. 3.
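The per-channel differencing of Eq. (1), the mask mean of Eq. (2), and the \(\Delta E\) thresholding of Eq. (3) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation; the sample pixel values and the 10-unit tolerance are invented for the example.

```python
import numpy as np

def icc_mask(lab_image: np.ndarray, mask: np.ndarray, tolerance: float) -> np.ndarray:
    """Binary classification of a Lab image against a reference mask region.

    lab_image: (M, N, 3) float array of L, a, b channels.
    mask: (M, N) boolean array marking the reference (e.g. vegetation) pixels.
    tolerance: Delta-E threshold (e.g. the 95% CI bound).
    """
    # Eq. (2): mean intensity of each channel over the masked region.
    mu = lab_image[mask].mean(axis=0)
    # Eq. (1): per-pixel channel differences from the mask means.
    delta = lab_image - mu
    # Eq. (3): Euclidean color distance Delta-E.
    delta_e = np.sqrt((delta ** 2).sum(axis=-1))
    # Pixels within tolerance are assigned logic 1, otherwise logic 0.
    return delta_e <= tolerance

# Tiny example: two green-ish pixels and two dissimilar pixels (values invented).
img = np.array([[[50.0, -40.0, 30.0], [52.0, -38.0, 29.0]],
                [[80.0, 10.0, -20.0], [20.0, 60.0, 60.0]]])
mask = np.array([[True, False], [False, False]])
result = icc_mask(img, mask, tolerance=10.0)
```

Only the two pixels whose Lab values lie within the tolerance of the masked region's mean survive the classification.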

Fig. 3

3D tree detection via the 2D ICC process: a masked image, b 2D color-classified image, c 3D color-classified image

You Only Look Once (YOLO) V5

You Only Look Once (YOLO) is one of the cutting-edge object detection algorithms [41]. YOLO outperforms traditional object detection algorithms by examining an image only once to find the objects inside it. Among all versions, YOLO V5 has the fastest processing speed and the most advanced structure, using parallel computations [42, 43]. Many new techniques are used in YOLO V5, such as the backbone (CSPDarknet), neck (PANet), and head (YOLO layer) [44]. To obtain more precise accuracy values, YOLO V5 uses a deeper and more complicated ANN structure, the Dense Block [45].

A careful look at the architecture in Fig. 4 shows that the cross-stage partial network (CSPNet) addresses repeated gradient errors at large scale and integrates the difference into a feature map. In other words, CSPNet minimizes the number of parameters used in the model, speeding up processing while preserving accuracy [46, 47]. In addition, PANet, the neck section, gathers parameters from various backbone levels instead of using a feature pyramid network (FPN). YOLO V5 also uses adaptive feature pooling to transfer features to subnetworks. Finally, in the last stage, the head detects objects using anchor boxes. The YOLO V5 architecture is represented in Fig. 4.

Fig. 4

YOLO V5 architecture

In this study, YOLO V5 is used to classify buildings from two-dimensional satellite images and determine the buildings’ locations in the three-dimensional point cloud.

Proposed deep learning and 3D classified map assisted SPPL architecture

Designing an intelligent SPPL architecture requires intensive work that depends on an AI-aided computer vision algorithm and signal power estimation to achieve optimum transmitter locations. The communication environment contains many obstacles, such as trees, buildings, and other man-made structures, that affect the network’s quality [48, 49]. Thus, as many obstructions as possible should be taken into account during propagation planning. Conventional systems rely on empirical models such as the Cost-231 Hata, log-normal (LN), and free-space path loss (FSPL) models to estimate cellular tower locations based on predefined parameters, including environment selection (rural, urban, and suburban) and a shadowing factor (\(X_{\sigma }\)). FSPL is the essential path loss model when there is no obstacle in the medium. It only computes the attenuation between the transmitter (TX) and receiver (RX) using the Friis formula, as indicated in Eq. (4) [50].

$${\text{FSPL}} = 10\log \left( {\frac{{P_{t} }}{{P_{r} }}} \right) = 10\log \left( {\frac{{(4\pi d)^{2} }}{{\lambda ^{2} G_{t} G_{r} }}} \right)$$
(4)

where \(P_{t}\): transmitter power; \(P_{r}\): receiver power; d: distance between transmitter and receiver; \(\lambda\): wavelength; \(G_{t}\): transmitter antenna gain; \(G_{r}\): receiver antenna gain.
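As a minimal numeric check of Eq. (4), the free-space loss can be evaluated directly; the sketch below assumes unity antenna gains and c = 3 × 10⁸ m/s, and the 100 m distance is only an illustrative value.

```python
import math

def fspl_db(distance_m: float, freq_hz: float,
            gt: float = 1.0, gr: float = 1.0) -> float:
    """Free-space path loss, Eq. (4): 10*log10((4*pi*d)^2 / (lambda^2 * Gt * Gr))."""
    wavelength = 3e8 / freq_hz
    return 10.0 * math.log10((4.0 * math.pi * distance_m) ** 2
                             / (wavelength ** 2 * gt * gr))

# At 850 MHz (the LTE band used in the experiment), 100 m of free space
# gives roughly 71 dB of loss.
loss = fspl_db(100.0, 850e6)
```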

However, the FSPL model is not applicable to an obstructed environment, where the scattering and absorption caused by obstacles lead to erroneous signal power predictions [51,52,53]. Even the LN model, one of the extensions of FSPL, only considers a shadowing effect (\(X_{\sigma }\): N(0, \(\sigma\))) and a path loss exponent (\(\eta\)) that are fitted to a specific environment and are inapplicable to different locations, as seen in Eq. (5) [54].

$${\text{PL}}[{\text{dB}}] = PL(d_{0} ) + 10\eta \log \left( {\frac{d}{{d_{0} }}} \right) + X_{\sigma }$$
(5)

where \(d_0\): close-in reference distance (1 meter); \(\eta\): path loss exponent; \(X_{\sigma }\): N(0, \(\sigma\)), a normally distributed shadowing term.
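Eq. (5) can be sketched as follows; the default values for PL(d₀), η, and σ are illustrative placeholders, not fitted parameters from the paper.

```python
import math
import random

def log_normal_pl_db(d: float, d0: float = 1.0, pl_d0: float = 31.0,
                     eta: float = 3.0, sigma: float = 4.0,
                     rng: random.Random = random.Random(42)) -> float:
    """Log-normal shadowing model, Eq. (5): PL(d0) + 10*eta*log10(d/d0) + X_sigma."""
    x_sigma = rng.gauss(0.0, sigma)  # zero-mean Gaussian shadowing term N(0, sigma)
    return pl_d0 + 10.0 * eta * math.log10(d / d0) + x_sigma

# With shadowing disabled (sigma = 0), 100 m at eta = 3 adds exactly 60 dB
# over the close-in reference loss.
pl = log_normal_pl_db(100.0, sigma=0.0)
```

The key limitation the text notes is visible here: η and σ are single numbers fitted to one environment, with no notion of individual trees or buildings.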

A similar limitation applies to the Cost-231 Hata model, since it is only meant for a limited range of frequencies and does not capture many variations of the environment, such as elevation, trees, and buildings. The mathematical model is demonstrated in Eq. (6) [55].

$$\begin{aligned} {\text{PL}}[{\text{dB}}] =\, & 46.3 + 33.9\log (f) - 13.82\log (h_{B} ) - a(h_{R} ,f) \\ & + (44.9 - 6.55\log (h_{B} ))\log (d) + C \\ \end{aligned}$$
(6)

where \(a(h_R,f)=(1.1\log (f) - 0.7)h_R-(1.56\log (f)-0.8)\); \(C=0\) dB in suburban areas; \(C=3\) dB in metropolitan areas. In summary, the performance parameters of the empirical models are defined in Table 1.

Table 1 Performance parameters of empirical models
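Eq. (6) with its correction term \(a(h_R,f)\) can be sketched as below. The sample inputs (1800 MHz, 30 m base station, 1.5 m receiver, 1 km) are illustrative, chosen inside the model's nominal 1500–2000 MHz range; this is not the paper's parameterization.

```python
import math

def cost231_hata_db(f_mhz: float, h_b: float, h_r: float,
                    d_km: float, metropolitan: bool = False) -> float:
    """Cost-231 Hata path loss, Eq. (6).

    f_mhz: carrier frequency (MHz); h_b: base station height (m);
    h_r: receiver height (m); d_km: distance (km).
    """
    # Receiver-height correction term a(h_R, f).
    a = (1.1 * math.log10(f_mhz) - 0.7) * h_r - (1.56 * math.log10(f_mhz) - 0.8)
    c = 3.0 if metropolitan else 0.0  # C = 3 dB metropolitan, 0 dB suburban
    return (46.3 + 33.9 * math.log10(f_mhz) - 13.82 * math.log10(h_b) - a
            + (44.9 - 6.55 * math.log10(h_b)) * math.log10(d_km) + c)

# Suburban example: about 136 dB at 1 km.
pl = cost231_hata_db(1800.0, h_b=30.0, h_r=1.5, d_km=1.0)
```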

In this work, we propose an optimal SPPL architecture that uses sensor fusion and image processing, including state-of-the-art deep learning systems and libraries such as YOLO V5 and TensorFlow, to deal with minor variations in the environment and achieve maximum accuracy, as seen in Fig. 5. In our architecture, three data sources are used: LiDAR data, 2D satellite images from Google Maps, and reference signal received power (RSRP) levels from mobile phones. These resources pass through different processes throughout the architecture to achieve maximum accuracy. The process begins with manual annotation of the georeferenced 2D images to identify the building class. The obtained images and corresponding annotations go through a training process with the YOLO V5 algorithm, one of the state-of-the-art object detection algorithms, to obtain a building detection model. Once training is completed, the optimized weights are used to locate and detect buildings. The primary purpose of color classification is to classify irregular terrain patterns, such as trees and power lines, higher than the average human height (1.65 meters). Counting objects taller than humans is crucial to maintaining network connectivity with lower processing time. It should be noted that detecting these objects allows us to assign an associated coefficient to the final SPPL prediction. However, since the georeferenced 2D satellite image is insufficient to acquire elevation, the georeferenced 3D LiDAR image should also be considered [56]. For this purpose, the raw georeferenced LiDAR data are converted to the Cartesian coordinate system (x, y, z coordinates) to visualize the surfaces of the desired environment, as demonstrated in Fig. 6.

Fig. 5

Optimal SPPL architecture

Fig. 6

Cartesian model presentation of a LiDAR point cloud and corresponding 2D satellite image

By inspecting the original image represented in Fig. 6, one can use the 2D georeferenced satellite image as a reference to locate the positions of the buildings in the 3D georeferenced LiDAR point cloud. By doing so, the elevation of every data point can be extracted from the environment. The same logic is also applicable to vegetation detection using color classification. After object detection and classification, the path loss exponents of the environment (\(\eta _v=1\pm 0.5\) dB, \(\eta _b=3.4\pm 1.5\) dB) can be computed through the average vegetation path loss difference per obstacle (\(\Delta PL_{veg}\)) and the average building path loss difference per obstacle (\(\Delta PL_{building}\)), as seen in Eq. (7).

$$\eta _{v} = {\text{Mean}}\left( {\Delta {\text{PL}}_{{{\text{veg}}}} } \right),\quad \eta _{b} = {\text{Mean}}\left( {\Delta {\text{PL}}_{{{\text{building}}}} } \right)$$
(7)

Employing the computer vision processes and the path loss differences (\(\Delta PL_{veg}\) and \(\Delta PL_{building}\)), the SPPL prediction parameters can be obtained as indicated in Table 2. It should be noted that implementing SPPL requires some constraints for vegetation, since the complexity of the natural environment still introduces uncertainties for maintaining network connectivity. Therefore, our approach applies the following constraints to count vegetation as an object: a \(3\times 3\) Gaussian filter, vegetation height \(\ge 1.65\) m, vegetation width \(\ge 2\) m, and peak-to-peak distance \(\ge 2\) m. Taking all this into account, the optimum SPPL toward the LOS can be achieved by aggregating the Friis formula, YOLO V5 object detection, and the color classification (CC) algorithm, as seen in Eq. (8).

$${\text{LOS}}\left( {{\text{SPPL}}} \right)[{\text{dB}}] = {\text{FSPL}}[{\text{dB}}] + {\text{CC}}\left( {\sum\limits_{{i = 0}}^{{n_{{{\text{tree}}}} }} {\eta _{{v_{i} }} } [{\text{dB}}]} \right) + {\text{YOLOV5}}\left( {\sum\limits_{{i = 0}}^{{n_{{{\text{buildings}}}} }} {\eta _{{b_{i} }} } [{\text{dB}}]} \right)$$
(8)
Table 2 Obtained SPPL training parameters
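The aggregation in Eq. (8) can be sketched as a simple sum once the detection stages have produced per-object losses. In this sketch the CC and YOLO V5 outputs are represented as plain lists of per-object dB values; the 90 dB free-space term is an illustrative input, not a measured value.

```python
def los_sppl_db(fspl_db: float, tree_losses_db: list[float],
                building_losses_db: list[float]) -> float:
    """Eq. (8): FSPL plus per-object tree and building losses along the LOS.

    tree_losses_db / building_losses_db: per-object loss values (dB) supplied by
    the color-classification and YOLO V5 detection stages, respectively.
    """
    return fspl_db + sum(tree_losses_db) + sum(building_losses_db)

# The experiment's LOS path detected 8 trees (~1 dB each) and 1 building (~3.4 dB),
# so an illustrative 90 dB free-space loss grows to 101.4 dB.
total = los_sppl_db(90.0, [1.0] * 8, [3.4])
```

The design point worth noting is that, unlike the LN or Hata models, the correction is applied per detected object, so an unobstructed path reduces exactly to FSPL.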

Mean absolute percentage error (MAPE)

Prediction accuracy is one of the essential components used to validate the model. Therefore, the mean absolute percentage error (MAPE) is used as a statistical measure of how accurate the system is. This method is based on the percentage representation of the average error between the ground truth and the values estimated by the models. The MAPE of the SPPL findings in the LOS direction is calculated as in Eq. (9) [57].

$${\text{LOS}}({\text{MAPE}}) = \frac{{100}}{n}\sum\limits_{{i = 1}}^{n} {\left| {\frac{{{\text{Ground}}\;{\text{Truth}}_{i} - {\text{Model}}_{i} }}{{{\text{Ground}}\;{\text{Truth}}_{i} }}} \right|}$$
(9)
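Eq. (9) translates directly into code. The two sample RSL readings below are made-up values (in dBm) chosen so the expected result is easy to verify by hand.

```python
def mape(ground_truth: list[float], model: list[float]) -> float:
    """Eq. (9): mean absolute percentage error between measured and predicted values."""
    n = len(ground_truth)
    return (100.0 / n) * sum(abs((g - m) / g)
                             for g, m in zip(ground_truth, model))

# Toy example: predictions off by 5% and 10% of the measurement give a 7.5% MAPE.
err = mape([-80.0, -90.0], [-84.0, -81.0])
```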

Analysis and results

This research compares three commonly used empirical signal strength models with the proposed SPPL model. The software and AI libraries used for this analysis are Google Colab, MATLAB, Python, TensorFlow, OpenCV, and Darknet. A PCTEL SeeGull EX RF scanner with an antenna frequency range of 698–3000 MHz is also utilized. The scanner is set to the LTE frequencies of the AT&T service provider, which uses an 850 MHz carrier with 20 MHz channel bandwidth. For the experiment, a 3D point cloud and a 2D satellite image are selected from the Florida Tech neighborhood between the coordinates (Lat: 28.062167, Lon: −80.619091) and (Lat: 28.063713, Lon: −80.616669). The process starts with dataset preparation. The dataset is prepared using 256 satellite images and 2992 building annotations and is divided into 80% training, 10% validation, and 10% test sets. The training hyperparameters (class name, number of epochs, maximum batch size, GPU, and learning rate) are set to buildings, 100, 16, enabled, and 0.01, respectively. After training is completed, the best model achieves 96% and 47% accuracy (mAP 0.5 and mAP 0.95, respectively). The box and objectness losses are obtained as 0.05067 and 0.1678, respectively. The obtained building detection results are indicated in Fig. 7.
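The 80/10/10 dataset split described above can be sketched as follows. This is an illustrative helper, not the authors' pipeline; the seed is arbitrary, and with 256 images integer truncation yields a 204/25/27 split rather than exact percentages.

```python
import random

def split_dataset(items: list, train_frac: float = 0.8,
                  val_frac: float = 0.1, seed: int = 0):
    """Shuffle annotated images and split into train/validation/test sets."""
    items = items[:]  # avoid mutating the caller's list
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# 256 satellite images, as in the experiment:
train, val, test = split_dataset(list(range(256)))
```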

Fig. 7

YOLO V5 training results for building detection in the Florida Tech neighborhood: a training results, b YOLO V5 building detection results, c survey area results

The building detection results demonstrate that the obtained model performs at a significant level for building detection. Since we are using a 3D environment and trying to obtain the best prediction for the height of the buildings while minimizing vegetation-related errors, we removed the area from the bounding boxes. This procedure is essential for keeping the detection-area height over the LOS so that the path loss is properly taken into account. The results are represented in Fig. 8.

Fig. 8

3D building detection process using 2D/3D fusion: a 2D building filtering, b 2D/3D fused image

Unlike the building detection process, YOLO V5 performs poorly on vegetation detection in 3D platforms due to the complex structure of the environment. Therefore, the YOLO V5 algorithm is replaced with the ICC algorithm to classify colors with the predefined limitations explained in Sect. 3. Here, the primary objective is to detect the location and number of vegetation objects that block signal propagation because their heights exceed the average human height. Therefore, we extracted only trees with minimum height and width limits of 1.65 m and 2 m, respectively. In order to see the performance of the SPPL model, the transmitter and receiver are placed on the vegetation-filtered 3D point cloud. The detected results are shown with a + sign in the LOS side view, as seen in Fig. 9.

Fig. 9

3D point data cloud from the Florida Tech neighborhood in Melbourne: a filtered vegetation, b detected trees

The results demonstrate that 8 trees and 1 building (\(object_{locs}\)) are detected in the LOS direction with 77.6% average accuracy. For every tree and building, average losses of \(1\pm 0.5\) dB and \(3.4\pm 1.5\) dB, respectively, are taken into account. Since the required objects are identified in the LOS direction, we implemented our SPPL model alongside the empirical models. The obtained path loss results are presented in Table 3 and Fig. 10. From Table 3, one can see that the LOS path loss changes every time the signal encounters an object. The empirical models show a higher amount of loss within the 250-meter range than our SPPL model. In order to assess the performance of the systems, we took actual RSL measurements from the same area in the LOS direction and compared the results. Figure 11 and Table 4 illustrate the RSL versus distance. The results show that our proposed SPPL model outperforms the empirical model-based deployments beyond a distance of 33 m. This is because buildings and trees are taken into account with respect to their effect on signal propagation: the higher the number of vegetation and building objects, the lower the observed performance. According to the RSL MAPE results for all path loss models, the proposed model shows a significant improvement, with a 3.95% estimation error. The results also demonstrate that the FSPL, LN, and Cost-231 Hata models have relatively higher errors of 14.53%, 6.08%, and 15.26%, respectively.

Fig. 10

Obtained path loss results

Fig. 11

Obtained RSL results

Table 3 Path loss comparison of the models
Table 4 RSL comparison of the models

Discussion

Since 5G and similar technologies are rapidly being deployed globally, quality of service (QoS) has become one of the fundamental criteria for success. However, current SPPL models such as LN and Cost-231 Hata usually fail to fulfill the requirements because empirical models are not intelligent enough to cover and classify micro-variations on irregular terrain. It is known that irregular terrain, buildings, and vegetation significantly affect the health of the propagation due to multipath reflections. Moreover, empirical models are limited by environment (urban and suburban) and location (e.g., Orlando, Tokyo); when implemented in a different environment, they end up with erroneous predictions. Therefore, intelligent characterization and classification of the terrain become critical for path loss estimation. As a solution, we developed a method that utilizes a 3D point cloud and 2D satellite images through the YOLO V5 and color classification algorithms. Satellite images and LiDAR data together provide enough information to simulate an environment’s geographical and man-made structures. These additional data and deep learning techniques result in a complex and more accurate RF propagation model that, unlike traditional models, works in every environmental situation. The tree canopies and buildings that play a key role in SPPL even at lower 4G band frequencies will have considerably more influence at the higher frequencies provided by 5G and beyond communication systems. The prediction results indicate a significant improvement in RSL estimation. This is because our model does not use a constrained model limited by the environment when there is no object between the transmitter and receiver; it only corrects its prediction when it encounters an object such as a building or tree.
Although differences in buildings’ structural materials and tree types may have a variable impact on SPPL, the prediction accuracy can be increased by measuring the propagation loss of the common tree and building types in the environment. In future work, the model will be validated further in environments that are harsh for RF propagation, such as forested areas and metropolitan cities.

Conclusion

This paper presented an intelligent SPPL system that implements state-of-the-art computer vision and deep learning algorithms, such as ICC and YOLO V5, for deploying wireless communication systems. Through ICC and a classified 3D map, the objects in the environment are separated into vegetation and buildings, since the type of object, mainly vegetation, has a significant effect on signal path loss. According to the YOLO V5 object detection results, buildings are detected with 96% (mAP 0.5) and 47% (mAP 0.95) accuracy. In addition, the trees that block signal propagation are detected with 77.6% accuracy using ICC and 2D/3D image fusion. The desired features (\(\eta _{v_i}\) and \(\eta _{b_i}\)) from the 2D/3D fused image are obtained to compute the final LOS (SPPL). According to the MAPE results for all path loss models, the proposed model shows a significant improvement, with a 3.95% estimation error, whereas the FSPL, LN, and Cost-231 Hata models have relatively higher errors of 14.53%, 6.08%, and 15.26%, respectively.

In future work, this approach can be implemented on 5G and 6G communication systems to obtain the optimum SPPL during the deployment of eNodeBs. To increase the classification accuracies, more advanced computer vision filters can be employed. Moreover, indoor LiDAR data can be merged with the current system to cover Bluetooth and Wi-Fi technologies.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

WSN:

Wireless sensor network

SPPL:

Signal power path loss

LOS:

Line of sight

IMU:

Inertial measurement unit

GPS:

Global positioning system

FCC:

Federal communications commission

Geo-MMS:

Geographical mobile mapping system

RSL:

Received signal level

UAV:

Unmanned aerial vehicle

INS:

Inertial navigation system

YOLOV5:

You Only Look Once version 5

4G:

Fourth generation

5G:

Fifth generation

ICC:

Image color classification

mAP:

Mean average precision

MAPE:

Mean absolute percentage error

CI:

Confidence interval

3D:

Three-dimensional

LiDAR:

Light detection and ranging

AI:

Artificial intelligence

LTE:

Long term evolution

ANN:

Artificial neural network

RNN:

Recurrent neural network

CNN:

Convolutional neural network

FIU:

Florida international university

FPN:

Feature pyramid network

LN:

Log normal

FSPL:

Free space path loss

RSRP:

Reference signal received power

QoS:

Quality of service

References

  1. S. Sur, T. Pefkianakis, X. Zhang, K.H. Kim, Towards scalable and ubiquitous millimeter-wave wireless networks, in (2018), pp. 257–271

  2. Z. Chi, X. Liu, W. Wang, Y. Yao, T. Zhu, Leveraging ambient LTE traffic for ubiquitous passive communication, in (2020), pp. 172–185

  3. T. Mukarram, K. Shrivastava, B. Sainath, Millimeter wave wireless system modeling with best channel selection policy, in IEEE (2020), pp. 1–6

  4. J.K. Arthur, L. Forgor, E. Effah, Analysing the effect of MIMO configuration on the throughput of LTE networks in multipath environments, in IEEE (2019), pp. 1–9

  5. N. Polat, M. Uysal, An experimental analysis of digital elevation models generated with LiDAR data and UAV photogrammetry. J. Indian Soc. Remote Sens. 46(7), 1135–1142 (2018)

  6. J. Lee, K.C. Lee, S. Lee, Y.J. Lee, S.H. Sim, Long-term displacement measurement of bridges using a LiDAR system. Struct. Control Health Monit. 26(10), e2428 (2019)

  7. Z. Li, J. Tan, H. Liu, Rigorous boresight self-calibration of mobile and UAV LiDAR scanning systems by strip adjustment. Remote Sens. 11(4), 442 (2019)

  8. C. Cortes, M. Shahbazi, P. Ménard, UAV-LiCAM system development: calibration and geo-referencing. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. (2018). https://doi.org/10.5194/isprs-archives-XLII-1-107-2018

  9. J. Yan, C. Wang, S. Xie, L. Wang, Design and validation of a surface profiling apparatus for agricultural terrain roughness measurements. INMATEH Agric. Eng. 58(3), 169–180 (2019)

  10. H.K. Kim, J.H. Park, H.Y. Jung, An efficient color space for deep-learning based traffic light recognition. J. Adv. Transp. (2018). https://doi.org/10.1155/2018/2365414

  11. H. Zhou, Artificial neural network, in (Springer, 2020), pp. 163–187

  12. Y.C. Wu, J.W. Feng, Development and application of artificial neural network. Wirel. Pers. Commun. 102(2), 1645–1656 (2018)

  13. I. Gonzalez-Fernandez, M. Iglesias-Otero, M. Esteki, O. Moldes, J. Mejuto, J. Simal-Gandara, A critical review on the use of artificial neural networks in olive oil production, characterization and authentication. Crit. Rev. Food Sci. Nutr. 59(12), 1913–1926 (2019)

  14. H.K. Ghritlahre, R.K. Prasad, Development of optimal ANN model to estimate the thermal performance of roughened solar air heater using two different learning algorithms. Ann. Data Sci. 5(3), 453–467 (2018)

  15. A. Bochkovskiy, C.Y. Wang, H.Y.M. Liao, YOLOv4: optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934 (2020)

  16. J. Clyburn, R.P. O’Rielly, Acceleration of broadband deployment by improving wireless facilities siting policies. Federal Commun. Comm. 20, 311 (2018)

  17. E. Westberg, J. Staudinger, J. Annes, V. Shilimkar, 5G infrastructure RF solutions: challenges and opportunities. IEEE Microw. Mag. 20(12), 51–58 (2019)

  18. I. Chih-Lin, S. Han, S. Bian, Energy-efficient 5G for a greener future. Nat. Electron. 3(4), 182–184 (2020)

  19. S. Demetri, G.P. Picco, L. Bruzzone, Estimating low-power radio signal attenuation in forests: a LiDAR-based approach, in IEEE (2015), pp. 71–80

  20. O. Ahmadien, H.F. Ates, T. Baykas, B.K. Gunturk, Predicting path loss distribution of an area from satellite images using deep learning. IEEE Access 8, 64982–64991 (2020)

  21. C. Chen, W. Gong, Y. Hu, Y. Chen, Y. Ding, Learning oriented region-based convolutional neural networks for building detection in satellite remote sensing images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 42, 461 (2017)

  22. R.B. Hegde, K. Prasad, H. Hebbar, B.M.K. Singh, Feature extraction using traditional image processing and convolutional neural network methods to classify white blood cells: a study. Australas. Phys. Eng. Sci. Med. 42(2), 627–638 (2019)

  23. A. Pawar, Image detection for defence and surveillance using machine learning. Int. J. Innov. Sci. Res. Technol. 5(1), 186–189 (2020)

  24. J. Thrane, D. Zibar, H.L. Christiansen, Model-aided deep learning method for path loss prediction in mobile communication systems at 2.6 GHz. IEEE Access 8, 7925–7936 (2020)

  25. A. Klautau, N. González-Prelcic, R.W. Heath, LIDAR data for deep learning-based mmWave beam-selection. IEEE Wirel. Commun. Lett. 8(3), 909–912 (2019)

  26. E. Krijestorac, S. Hanna, D. Cabric, Spatial signal strength prediction using 3D maps and deep learning, in ICC 2021—IEEE International Conference on Communications, IEEE (2021)

  27. J. Li, D. Lu, G. Zhang, J. Tian, Y. Pang, Post-disaster unmanned aerial vehicle base station deployment method based on artificial bee colony algorithm. IEEE Access 7, 168327–168336 (2019)

  28. O. Andryeyev, A. Mitschele-Thiel, Increasing the cellular network capacity using self-organized aerial base stations, in (2017), pp. 37–42

  29. M. Alzenad, A. El-Keyi, F. Lagum, H. Yanikomeroglu, 3-D placement of an unmanned aerial vehicle base station (UAV-BS) for energy-efficient maximal coverage. IEEE Wirel. Commun. Lett. 6(4), 434–437 (2017)

  30. D. Pliatsios et al., Drone-base-station for next-generation Internet-of-Things: a comparison of swarm intelligence approaches. IEEE Open J. Antennas Propag. 3, 32–47 (2021)

  31. R. Singh, M. Thompson, S.A. Mathews, O. Agbogidi, K. Bhadane, K. Namuduri, Aerial base stations for enabling cellular communications during emergency situation, in IEEE (2017), pp. 103–108

  32. T.G.J.Z.J.C. Chiang, Mobile mapping technologies, in Urban Book Series (Science and Business Media Deutschland GmbH, 2021), pp. 439–465

  33. J. Li, L. Ma, Y. Fan et al., An image stitching method for airborne wide-swath hyperspectral imaging system equipped with multiple imagers. Remote Sens. 13(5), 1001 (2021)

  34. S. Du, X. Li, H.A. Lauterbach, D. Borrmann, A. Nüchter, Combining LiDAR scan matching with stereo visual odometry using curvefusion, in IEEE (2021), pp. 335–339

  35. J.S. Berrio, M. Shan, S. Worrall, E. Nebot, Camera-LiDAR integration: probabilistic sensor fusion for semantic mapping. IEEE Trans. Intell. Transp. Syst. 23(7), 7637–7652 (2021)

  36. LiDAR Elevation Data Download App, FIU GIS Center. https://maps.fiu.edu/gis/projects/lidar-elevation-data-download-app. Accessed 06 May 2021

  37. Y. Egi, C.E. Otero, Machine-learning and 3D point-cloud based signal power path loss model for the deployment of wireless communication systems. IEEE Access 7, 42507–42517 (2019)

  38. W. Li, H. Qiu, Y. Tang, L. Liao, Z. Zhang, Image colorization using regression, classification and GAN. Retrieved January 2, 2022, from https://weijil.com/uploads/Colorization_Paper.pdf

  39. H.A. Nugroho, R.D. Goratama, E.L. Frannita, Face recognition in four types of colour space: a performance analysis. IOP Conf. Ser. Mater. Sci. Eng. 1088, 012010 (2021)

  40. D.M. Momtaz, A. Khaloo, D. Lattanzi, Color-space analytics for damage detection in 3D point clouds. Struct. Infrastruct. Eng. 18(6), 775–788 (2021)

  41. P. Ren, L. Wang, W. Fang, S. Song, S. Djahel, A novel squeeze YOLO-based real-time people counting approach. Int. J. Bio-Inspir. Comput. 16(2), 94–101 (2020)

  42. T. Mahendrakar, R.T. White, M. Wilde, B. Kish, I. Silver, Real-time satellite component recognition with YOLO-V5, in Small Satellite Conference (2021)

  43. Y. Fang, X. Guo, K. Chen, Z. Zhou, Q. Ye, Accurate and automated detection of surface knots on sawn timbers using YOLO-V5 model. BioResources 16(3), 5390–5406 (2021)

  44. R. Xu, H. Lin, K. Lu, L. Cao, Y. Liu, A forest fire detection system based on ensemble learning. Forests 12(2), 217 (2021)

  45. J. Yao, J. Qi, J. Zhang, H. Shao, J. Yang, X. Li, A real-time detection algorithm for kiwifruit defects based on YOLOv5. Electronics 10(14), 1711 (2021)

  46. W. Jia, S. Xu, Z. Liang, Y. Zhao, H. Min, S. Li, Y. Yu, Real-time automatic helmet detection of motorcyclists in urban traffic using improved YOLOv5 detector. IET Image Proc. 15(14), 3623–3637 (2021)

  47. N. Ryoko, T. Nishio, T. Murase, IEEE 802.11ad communication quality measurement in in-vehicle wireless communication with real machines, in IEEE (2020), pp. 0700–0706

  48. P.D.P. Adi, A. Kitagawa, A performance of radio frequency and signal strength of LoRa with BME280 sensor. Telkomnika 18(2), 649–660 (2020)

  49. S.K. Khan, M. Farasat, U. Naseem, F. Ali, Performance evaluation of next-generation wireless (5G) UAV relay. Wirel. Pers. Commun. 113(2), 945–960 (2020)

  50. C. Lin, F. Gao, H. Dai, J. Ren, L. Wang, G. Wu, Maximizing charging utility with obstacles through Fresnel diffraction model, in IEEE (2020), pp. 2046–2055

  51. P. Rodríguez-Vázquez, M.E. Leinonen, J. Grzyb, N. Tervo, A. Parssinen, U.R. Pfeiffer, Signal-processing challenges in leveraging 100 Gb/s wireless THz, in IEEE (2020), pp. 1–5

  52. M. Zoula, M. Prágr, J. Faigl, On building communication maps in subterranean environments, in (2020), pp. 15–28

  53. B. Yang, L. Guo, R. Guo, M. Zhao, T. Zhao, A novel trilateration algorithm for RSSI-based indoor localization. IEEE Sens. J. 20(14), 8164–8172 (2020)

  54. O. Simeon, Analysis of effective transmission range based on Hata model for wireless sensor networks in the C-band and Ku-band. J. Multidiscip. Eng. Sci. Technol. (JMEST) 7(12), 13673–13679 (2020)

  55. C. Liu, Y. Zhan, Q. Deng, Y. Qiu, A. Zhang, An improved differential box counting method to measure fractal dimensions for pavement surface skid resistance evaluation. Measurement 178, 109376 (2021). https://doi.org/10.1016/j.measurement.2021.109376

  56. G. Nazaré, R. Castro, L.R. Gabriel Filho, Wind power forecast using neural networks: tuning with optimization techniques and error analysis. Wind Energy 23(3), 810–829 (2020)

  57. D. Chicco, M.J. Warrens, G. Jurman, The coefficient of determination R-squared is more informative than SMAPE, MAE, MAPE, MSE and RMSE in regression analysis evaluation. PeerJ Comput. Sci. 7, e623 (2021)

Acknowledgements

Not applicable.

Funding

There is no funding to report.

Author information

Contributions

All the authors contributed to the paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Yunus Egi.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Egi, Y., Eyceyurt, E. Classified 3D mapping and deep learning-aided signal power estimation architecture for the deployment of wireless communication systems. J Wireless Com Network 2022, 107 (2022). https://doi.org/10.1186/s13638-022-02188-2


Keywords

  • Sensor fusion
  • LiDAR point cloud
  • Cellular tower deployment
  • Deep learning
  • YOLO V5