This article presents a software tool implementing a semi-deterministic model that provides rapid predictions, accounts for all significant physical phenomena and uses a clear-cut description of material properties. The software can predict signal coverage, delay profiles and angles of arrival for receivers located anywhere in a scenario, using three probabilistic parameters to describe each material. These parameters can be efficiently optimised against measured data by a genetic algorithm optimiser contained in the tool, so that knowledge of the actual material constants is not required. The semi-deterministic model is briefly described, whereas its implementation in the tool is explained in greater detail. Measured narrowband as well as wideband data are presented, and the basic principles of the computer optimisation utilised in the tool are shown. Optimised results are compared with the measured ones and the deviation is determined. The principle of a simple multi-threading algorithm that improves the tool performance by decreasing computation time is presented, along with computational times compared for different numbers of threads.
The design of modern, efficient wireless systems requires site-specific planning. The locations of the base stations forming modern digital systems are optimised on the basis of path-loss predictions so that users are provided with coverage enabling reliable, high-speed data transfers. Path-loss predictions alone, however, can provide incomplete information on the radio channel: because they do not fully consider multipath propagation, they give no information on fading characteristics. Characteristics such as the impulse response (delay profile) or angle-of-arrival (AoA) predictions can complete the information provided by path-loss predictions and improve the design performance.
Many models for indoor propagation predictions have already been proposed [1–3]. There are many possible classifications, but two basic groups are usually identified. One group, the empirical models (e.g. the One Slope Model), utilises relatively simple formulas containing empirical parameters to estimate path loss and therefore does not require a database of exact obstacle positions. These models provide rapid predictions, but due to their relatively simple principle of operation they cannot achieve a high level of accuracy. The second group, the deterministic models (e.g. full-wave, ray-optical or moment-method models), is based on an approach requiring an accurate and complete database of obstacles, including their material properties. This approach uses a rigorous description of electromagnetic wave propagation, which is, unfortunately, characterised by high time consumption. Most deterministic models utilise the Finite-Difference Time-Domain method (a full-wave method based on the numerical solution of Maxwell's equations), Ray-Tracing (a ray-optical method based on finding all possible paths between the transmitter and receiver) or a similar approach (finding all significant paths between the transmitter and receiver) in combination with electromagnetic field theory (solving ray/obstacle interactions using the Fresnel equations and UTD/GTD). Ray-optical methods fail when the size of the obstacles is not much larger than a wavelength, while full-wave methods require excessive computation time and memory.
Apart from the two categories mentioned above, we can distinguish two further groups of models which cannot be classified into either basic category, as they combine deterministic and empirical approaches; we refer to them as semi-empirical and semi-deterministic models. Semi-empirical models (e.g. the Multi-Wall Model) benefit from a speedy empirical approach but, in contrast to purely empirical models, account for the materials and positions of obstacles. Semi-deterministic models (e.g. the Dominant Path Model or the Motif Model) represent a compromise between time-consuming deterministic models and less accurate empirical models. In general, semi-deterministic models use a rigorous physical approach which, in contrast to deterministic models, is simplified in certain respects. For example, the time-consuming solution of ray/obstacle interactions usually computed via the Fresnel equations and UTD/GTD can be replaced by simple, straightforward probabilistic relations.
There are many software tools employing a variety of models on the market today. Representative commercial tools include EDX, Winprop and Ranplan software; non-commercial tools such as the Grass-Raplat project also exist. The models featured in such tools are usually modified by the developer to return highly accurate results and to decrease computation time. Most software tools provide fast simulations thanks to accelerating methods such as pre-processing of the obstacle database, but the majority of these tools do not take material roughness and diffuse scattering into account effectively. Another disadvantage is that the models require detailed knowledge of material properties for solving ray/obstacle interactions, and these properties cannot be optimised in a straightforward way since the number of parameters and rays is usually too high for fast calibration.
We have developed a tool which, in contrast to publicly available tools, effectively models material roughness. Moreover, material properties are defined using only four parameters tuned by measurements, which is an unequivocal and effective way of describing materials.
The aim of this article is to present a propagation tool utilising a 3D site-specific model that considers all significant physical phenomena (penetration, reflection, diffraction and diffuse scattering) while providing both narrowband (signal coverage) and wideband (delay profile, AoA) predictions, thus enabling detailed designs of indoor scenarios and fast semi-deterministic calculations based on a simplified approach. Unlike many other site-specific models, our model does not require a database of the material properties of the obstacles forming a scenario, as the material properties are replaced by a few probabilistic parameters optimised by means of a measurement campaign.
The rest of this article is organised as follows: The following section briefly describes the basic principles of the model implemented in the software tool. Sections 3 and 4 deal with model implementation and calibration, respectively. The fifth section, featuring a brief description of test scenarios, a measurement campaign and model calibration, deals with the tool performance verification. The section also shows a compelling comparison of simulation time-consumption performed with a variety of threads, as well as the difference in computational time between a pre-processed scenario and a scenario without pre-processing. The last section summarises the features and benefits of our tool.
2. Propagation model
The principles of the 3D model for indoor propagation predictions implemented in our tool, here extended by diffraction phenomena, are based on the algorithms developed for long, straight tunnels. The model follows a fast semi-deterministic approach utilising a modified ray-launching method, where electromagnetic waves are substituted by a high number (an infinite number in the ideal case) of plane waves represented by their directional vectors/rays.
The underlying principle of our method is the presumption that each ray launched from the transmitting antenna carries an equal part of the overall transmitter power, given by the overall power divided by the number of launched rays. Rays are launched from the transmitting antenna according to an antenna radiation pattern (Figure 1a) which is converted into a launching pattern (Figure 1b), thereby determining the number of rays launched in specific directions.
The launching pattern is created from the antenna pattern (Figure 1a) by first transforming the directivity values into probabilistic values, so that each angle is assigned a launch probability. On the basis of these probabilities, the distribution function (Figure 1c) is created. If an extremely large number of rays is launched, the launching pattern provides a close approximation of the real antenna pattern.
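The conversion from pattern values to launch directions is, in effect, inverse-transform sampling. The following minimal Python sketch illustrates the idea (the tool itself is written in C#; the angle grid and the patch-like cos² pattern are purely illustrative, not the tool's actual data):

```python
import bisect
import math
import random

def build_cdf(pattern):
    """Normalise per-angle directivity values into a discrete
    cumulative distribution function (the 'launching pattern')."""
    total = sum(pattern)
    cdf, acc = [], 0.0
    for p in pattern:
        acc += p / total
        cdf.append(acc)
    return cdf

def draw_angle(angles, cdf, rng):
    """Pick a launch angle by inverse-transform sampling:
    a uniform random number is mapped through the CDF."""
    return angles[bisect.bisect_left(cdf, rng.random())]

# Illustrative pattern only: a patch-like lobe, strongest at broadside.
angles = list(range(-90, 91, 5))
pattern = [math.cos(math.radians(a)) ** 2 for a in angles]
cdf = build_cdf(pattern)

rng = random.Random(0)
samples = [draw_angle(angles, cdf, rng) for _ in range(100_000)]
```

With a large number of draws, the histogram of `samples` approximates the antenna pattern, exactly as the text describes for a large number of launched rays.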
For each ray, a random number is generated and the corresponding launch angle is chosen by means of the distribution function. Once a ray is launched from the transmitter, its intersection with an obstacle is found and the AoA is computed. The subsequent direction of the impinging ray is determined by means of what we call a probability radiation pattern (PRP) (Figure 2), which gives the probability of each subsequent direction for every spatial AoA. A single new ray is thus generated in place of the impinging ray, while all significant phenomena (reflection, penetration, absorption and diffraction) are taken into account. The amplitude and phase of the rays are not tracked; the received power is given by the number of rays reaching the receiver, which, together with the rapid calculation of ray/obstacle interactions, decreases the computation time. This implies that our model predicts the mean value of the received power.
The PRP consists of two parts: a directional part expressing a specular reflection proportional to the AoA of an impinging ray, and an omni-directional part which expresses diffuse scattering and is independent of the AoA. The overall probability pattern is given by the sum of these two parts (Figure 2a).
The PRP for flat sections of obstacles is formed by means of only three probabilistic parameters--the probability of absorption (pA), reflection (pRT) and diffuse scattering (pDS)--which can be obtained on the basis of measurement and computer optimisation. The probability of absorption is defined as the ratio of absorbed to incident power. The probability of reflection is the ratio of reflected to emitted power (Figure 2b). The probability of diffuse scattering (Figure 2c) is defined as the ratio of the omni-directionally emitted power to the overall emitted power. A detailed derivation and description of these parameters has been published previously.
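One plausible way to resolve a single ray/obstacle interaction from these three parameters is sketched below in Python. The sequencing of the random decisions (absorb first, then diffuse vs. directional, then reflect vs. penetrate) is our reading of the definitions above, not necessarily the exact order used in the tool:

```python
import random

def interact(p_abs, p_refl, p_scat, rng):
    """Resolve one ray/obstacle interaction stochastically.

    A plausible sequencing of the three probabilities:
      1. absorb the ray with probability p_abs;
      2. otherwise, with probability p_scat, re-emit it in a random
         (diffuse) direction;
      3. otherwise, reflect it specularly with probability p_refl,
         or let it penetrate the obstacle.
    """
    if rng.random() < p_abs:
        return "absorbed"
    if rng.random() < p_scat:
        return "scattered"
    return "reflected" if rng.random() < p_refl else "penetrated"

rng = random.Random(42)
counts = {"absorbed": 0, "scattered": 0, "reflected": 0, "penetrated": 0}
for _ in range(100_000):
    counts[interact(p_abs=0.3, p_refl=0.6, p_scat=0.1, rng=rng)] += 1
```

Over many rays the outcome fractions converge to the configured probabilities, which is why tracking only ray counts at the receiver yields the mean received power.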
To take the influence of diffraction into account, we have defined a new parameter--the diffraction distance (DDIF), representing the distance from an edge within which a ray is considered to be diffracted (Figure 3a). If a ray impinges on an obstacle at less than the diffraction distance from the edge, it is diffracted and its subsequent direction is determined by a corner PRP, which differs from the PRP for the flat section of the obstacle. In general, the diffraction distance depends on the wavelength, the material of the obstacles (walls) and their arrangement, as well as on the other probabilistic parameters. As with those parameters, the value of the diffraction distance is either optimised or set as a constant (see Section 5.2). We recommend setting the diffraction distance equal to the grid size, as this significantly decreases the time needed to decide between the flat-section and corner PRP. This approach yields a rapid determination of the subsequent direction of an impinging ray while accounting for the approximate shape of the diffraction pattern based on measurements.
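The flat-versus-corner decision reduces to a simple distance test. A minimal one-dimensional Python sketch, assuming the hit position is parameterised along the wall (function and parameter names are ours, for illustration only):

```python
def use_corner_prp(hit_t, wall_length, d_dif):
    """Decide whether an impinging ray is treated as diffracted.

    hit_t:       position of the intersection along the wall,
                 measured from one edge (0 .. wall_length).
    wall_length: length of the flat wall section.
    d_dif:       diffraction distance D_DIF.

    Returns True when the hit falls within d_dif of either edge,
    in which case the corner PRP applies; otherwise the
    flat-section PRP is used.
    """
    return hit_t < d_dif or (wall_length - hit_t) < d_dif
```

Setting `d_dif` to the grid size means this test can be answered directly from the grid cell containing the intersection, which is the speed-up the text mentions.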
As was already stated, the description of materials by probabilistic parameters and diffraction distance is beneficial since material properties required for accurately computing ray/obstacle interactions, including diffractions, are not usually known.
3. Model implementation
The principles of the model described above were implemented in C# and equipped with a graphical user interface. The most important parts of the algorithm are described below (Figure 4).
After the simulation is launched, all relevant information set by the user (information on the site, transmitter and receivers, requested outputs, etc.) is integrated and the variables are adjusted to create a database of PRPs for all materials found in the scenario. If there is no pre-processed database of obstacles, the scenario is pre-processed according to a space-division method described in detail in the cited literature. The next step is to divide the computation into several parallel threads if the user requests a multi-thread calculation. Multi-thread computation is suitable for computers with multi-core processors, as it decreases computational time significantly (see Section 5.4). We performed simulations on an Intel Core i7 computer with four computational cores and hyper-threading technology, meaning that tasks can effectively be computed in eight parallel threads. Theoretically, however, the number of parallel threads in our software is not limited.
After the parallelisation of the computation, the main loop begins: a ray is launched from the transmitter, propagates within the scenario and passes through grid elements, which are recorded into a database of results until a ray/obstacle intersection is found. When an intersection is found and the ray is not absorbed, the subsequent direction of the impinging ray is determined by means of the PRP corresponding to the material of the intersected obstacle. When the ray is absorbed or leaves the scenario, the current iteration is terminated and a new one begins, unless a number of rays sufficient for an accurate prediction has already been launched. One way to determine whether enough rays have been launched is to compare the current results with the results recorded a certain number of rays earlier (e.g. one million iterations before). If the results are the same (i.e. the difference is less than a predefined threshold), a sufficient number of rays has been launched; if not, another batch of rays is launched.
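The stop criterion can be sketched as follows in Python; the dB threshold and the flat-list representation of the coverage map are illustrative assumptions, not the tool's actual data structures:

```python
def converged(prev_map, curr_map, threshold_db=0.5):
    """Batch-to-batch stop criterion: compare the coverage map before
    and after launching another batch of rays (e.g. one million).
    Enough rays have been launched when no grid element changed by
    more than threshold_db. Both maps are flat lists of dB values."""
    return all(abs(a - b) <= threshold_db
               for a, b in zip(prev_map, curr_map))

# Example: coverage estimates after successive one-million-ray batches.
after_1m = [-62.1, -75.4, -80.0]
after_2m = [-62.0, -75.5, -80.1]
stop = converged(after_1m, after_2m)
```

Here `stop` is true, so no further batch would be launched; a larger change in any cell would trigger another batch of rays.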
The loop described above runs in parallel in all threads; the final results are then combined, with the overall result calculated as the sum of the per-thread results.
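Because rays are independent and the result is additive, the combine step is an element-wise sum of per-thread grids. A toy Python sketch of this structure (the tool itself is written in C#, where threads run truly in parallel; here each "ray trajectory" is stood in for by a few random grid cells):

```python
import random
from concurrent.futures import ThreadPoolExecutor

GRID_SIZE = 16  # illustrative number of grid elements

def launch_batch(n_rays, seed):
    """Toy stand-in for one thread's ray-launching loop: each ray
    increments the count of every grid element it passes through.
    Here the 'trajectory' is just three random cells per ray."""
    rng = random.Random(seed)
    grid = [0] * GRID_SIZE
    for _ in range(n_rays):
        for cell in rng.sample(range(GRID_SIZE), 3):
            grid[cell] += 1
    return grid

n_threads = 4
with ThreadPoolExecutor(max_workers=n_threads) as pool:
    partials = list(pool.map(launch_batch,
                             [2500] * n_threads, range(n_threads)))

# Overall result: element-wise sum of the per-thread grids.
total = [sum(col) for col in zip(*partials)]
```

Giving each thread its own seed keeps the random sequences independent, and summing at the end avoids any locking on the shared result during the loop.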
4. Model optimisation
We recommend calibrating the model with the optimisation tool, which utilises principles of computer optimisation, several of which have already been published. We chose a calibration technique based on genetic evolutionary algorithms.
Each individual represents a set of four parameters--the probability of reflection, absorption and diffuse scattering, and the diffraction distance. The optimisation process, shown in Figure 5, tunes the model by running repeated narrowband simulations of a scenario (Figure 4), with the probabilistic parameters of the obstacles (materials) set before each simulation. The results provided by the simulation are compared with the measured (target) data and the difference is determined. When the difference is zero or lower than a predefined value, the tuning process is finished and the probabilistic parameters are set to values that provide results corresponding to the measured data. Accordingly, the model can easily be adjusted to any scenario by means of measured data, using just a few probabilistic parameters.
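The calibration loop can be sketched in Python with the operators used later in Section 5.2 (roulette-wheel selection, uniform crossover). The fitness function here is a toy squared distance to a stand-in target; in the real tool that step runs a narrowband simulation and compares it with measured coverage. Elitism is our addition to keep the sketch's convergence monotone:

```python
import random

random.seed(7)

TARGET = [0.45, 0.35, 0.15, 1.5]  # stand-in "measured-data optimum"

def simulate_error(params):
    """Toy fitness: squared distance of (pRT, pA, pDS, D_DIF) from
    TARGET. The real tool would simulate and compare with measurement."""
    return sum((p - t) ** 2 for p, t in zip(params, TARGET))

def roulette(pop, fits):
    """Roulette-wheel selection: lower error -> larger wheel slice."""
    weights = [1.0 / (1e-9 + f) for f in fits]
    return random.choices(pop, weights=weights, k=1)[0]

def uniform_crossover(a, b):
    """Uniform crossover: each gene taken from either parent."""
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def mutate(ind, rate=0.2, scale=0.05):
    return [g + random.gauss(0.0, scale) if random.random() < rate else g
            for g in ind]

# 30 individuals, 30 generations, as in the settings of Section 5.2.
pop = [[random.random(), random.random(), random.random(),
        3.0 * random.random()] for _ in range(30)]
init_best = min(simulate_error(ind) for ind in pop)

for _ in range(30):
    fits = [simulate_error(ind) for ind in pop]
    elite = pop[fits.index(min(fits))]          # keep the best individual
    pop = [elite] + [mutate(uniform_crossover(roulette(pop, fits),
                                              roulette(pop, fits)))
                     for _ in range(len(pop) - 1)]

best = min(simulate_error(ind) for ind in pop)
```

The loop stops after a fixed number of generations here; the tool additionally stops early once the best fitness beats the predefined goal, as described in Section 5.2.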
5. Tool performance verification
As mentioned earlier, the model implemented in the tool relies on measurements providing the data needed for calibration. Although a measurement campaign is the most accurate way to calibrate the model, it has one significant drawback--it can be time consuming and therefore costly, making it impractical to perform a measurement in every scenario examined. A more effective way is to choose a typical indoor scenario (the primary scenario) and calibrate the model using measured data to obtain a "universal" set of material parameters as the result of the optimisation process. This set of parameters should also be valid for similar scenarios (secondary scenarios) at the same frequency. If a different frequency is used, a corresponding set of probabilistic parameters needs to be deployed. To verify this idea, we performed the following test.
As the primary scenario, we chose a floor in our university building (Figure 6a, Section 5.1) for which we already had a detailed floor plan and approximate information about wall materials. Floors inside a hotel (Diplomat hotel in Prague--Figure 6b) and commercial office building (Koospol company--Figure 6c) were chosen as secondary scenarios because of limited information on the floor plan and wall materials. Although built for different purposes, they have similar structures and functions, all containing long corridors and many small rooms.
5.1. Measurement campaign
To verify the function of our propagation tool, a measurement campaign inside the primary scenario was carried out. On the basis of a detailed construction plan, a 3D model of the floor (the sixth floor--Figure 6a) of a Czech Technical University building in Prague was made using the site builder tool, which is a part of our software.
We defined five types of obstacles--light wall, heavy wall, glass, metal and floor & ceiling. Colours in Figure 6 represent different types of obstacles and correspond to the colours shown in Table 1.
The signal coverage measurement (Figure 7) was performed over 144 lines in the primary scenario at a frequency of 1.9 GHz. An output power of 25 dBm was transmitted from a patch antenna at a height of 1.8 m above floor level. The receiver dynamic range was between -47 and -107 dBm. The height of the receiving antenna was 1.5 m.
Our tool predicts the mean value of path loss, so the measured data were averaged using a running RMS window of 20 λ0 for the lines measured in the corridor (measured with a step of about 0.45 m) and of 5 λ0 elsewhere (measured with a step of 0.13 m). Figure 7 displays the measured data together with the averaged values.
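Such RMS averaging of a dB trace must be done in the linear power domain. A minimal Python sketch, assuming the trace is a flat list of dB samples (the sample values below are invented for illustration):

```python
import math

def running_rms_db(trace_db, window):
    """Average a dB trace in the linear power domain over a sliding
    window of `window` samples, returning dB values again. Edges use
    a truncated window."""
    half = window // 2
    out = []
    for i in range(len(trace_db)):
        lo, hi = max(0, i - half), min(len(trace_db), i + half + 1)
        lin = [10.0 ** (s / 10.0) for s in trace_db[lo:hi]]
        out.append(10.0 * math.log10(sum(lin) / len(lin)))
    return out

# 20 lambda_0 at 1.9 GHz is about 3.2 m; with the 0.45 m corridor step
# that corresponds to a window of roughly 7 samples.
corridor = [-60.0, -72.0, -58.0, -75.0, -61.0, -70.0, -59.0, -74.0]
smoothed = running_rms_db(corridor, 7)
```

Averaging in the linear domain weights strong samples correctly; averaging the dB values directly would underestimate the mean power.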
The impulse response (delay profile) measurement (Figures 8 and 9) was performed at five points in the scenario (Rx 1-Rx 5 in Figure 7) by means of a vector network analyser using an indirect time-domain measurement. The measurement dynamic range was about 60 dB and the time resolution about 0.1 ns.
Secondary scenarios (Figure 10) were measured using the same equipment, and data obtained from the measurement was processed as in the case of the primary scenario.
5.2. Optimisation process
The genetic evolutionary algorithms used for model calibration had the following settings: the maximum number of generations, as well as the number of individuals in each generation, was set to 30, based on our experience with the optimiser and as a compromise between performance (optimisation time) and target precision. Individuals were modified by uniform crossover, and a roulette wheel was used as the method of selection for the next generation. We chose five calibration points according to Figure 7 (Rx 1-Rx 5). For more complex scenarios, however, we recommend placing a calibration point in each room, or at least at locations where strong signal fluctuations are expected.
The optimisation process stopped automatically during the 20th generation, when the fitness of the best individual was better than the predefined goal (3 dB²). The resulting obstacle parameters are shown in Table 1. The probabilistic parameters were set automatically by the optimisation algorithm; the diffraction distance was set in advance to 3 λ, which is approximately equal to the size of the grid used for predictions (0.5 m).
Simulations using the parameters shown in Table 1 were performed to provide data for comparing the measured and simulated results in the primary scenario. Figures 8 and 9 show a comparison of the measured data and the impulse response prediction.
The signal coverage provided by the simulation was compared with the averaged measured data (Figure 7); the mean value of the difference between predicted results and measurement was -0.02 dB, with a standard deviation of 6.91 dB. This demonstrates that the results provided by our model are in good agreement with the measured results. The best agreement is reached at the points where the calibration receivers are located and, conversely, the worst agreement at points far from them.
The optimised values of the probabilistic parameters were also used for the other two scenarios (Figure 6b,c). As we did not have a detailed plan showing the positions of walls and windows or information about materials, we modelled these scenarios using only two kinds of materials--light and heavy walls. Table 2 compares the accuracy achieved in each scenario, expressed by the mean value and standard deviation of the difference between measured data and simulated results, in addition to the time needed for computation (with 10⁹ rays launched). It should be mentioned that we used a 3D ray-launching algorithm with no post-processing or optimisation of the number of rays for this comparison.
The computation times show that the time needed depends mostly on the size of the scenario, since tracing the rays is the most time-consuming part of the algorithm. The number of obstacles has almost no influence on time consumption thanks to the pre-processing method used.
Lower accuracy was achieved for the Diplomat Hotel and the Koospol building (i.e. the secondary scenarios) in comparison with the primary scenario, which was optimised on the basis of measured data.
Although the mean values of the difference, as well as the standard deviations, indicate that the probabilistic parameters optimised for the primary scenario (and reused for the secondary scenarios) provide slightly lower performance than comparable models (deviations between 5 and 8 dB), this can be considered satisfactory given that a database of exact obstacle positions and material properties was not available and only approximate positions of obstacles were taken into account.
5.4. Time-consumption analysis
To show the influence of the multi-thread computation, as well as the pre-processing procedure, on time consumption, we carried out several simulations at different settings using the scenario described earlier. Parallel computation was expected to speed up the computation noticeably. Our computer was equipped with a quad core processor and hyper-threading, consequently it is able to effectively calculate up to eight parallel threads. Figure 11 contains a comparison of computational times for a particular number of threads, ranging from 1 to 12, in use.
As can be seen, multi-thread computation has a positive influence on computational time. An inverse relationship between the number of threads used and the computational time can be observed for one to four threads. Between four and eight threads, we still observe a slight rise in performance thanks to the hyper-threading technology. A further increase in the number of parallel computational threads, however, brings no benefit in the form of faster computation, since the processor has no additional resources (cores) available for the extra threads.
The influence of multi-threading on computational time is thus pronounced. Furthermore, the pre-processing procedure based on the rectangular grid method takes less than 1 s on the test scenario, yet the resulting computation time is only 0.5% of the computation time without any pre-processing.
6. Conclusion

A computationally efficient software tool for fast, site-specific indoor predictions and optimisations has been developed and presented. The semi-deterministic, measurement-based model it utilises provides rapid, detailed predictions of both narrowband and wideband channel parameters. The model is based on unique principles whereby interactions of the signal with the scenario are solved stochastically while all physical phenomena of wave propagation are considered simultaneously. Optimisation (tuning) of the model is based on repeated simulations of a scenario, where the probabilistic parameters (four for each type of obstacle) are set according to the comparison of simulated and measured data. This simple and efficient approach yields a robust algorithm with fast convergence.
The prediction tool provides fast calculations of signal coverage, delay profiles and AoAs. Its core uses a semi-deterministic model based on a modified ray-launching method and a unique way of solving ray/obstacle interactions; if a multi-core processor is used, the simulation can be parallelised. Genetic algorithms optimise the parameters of obstacles so that the model can be adjusted according to the measured data.
The performance and accuracy of the prediction and optimisation tool have been verified on the basis of measurement campaigns conducted in office buildings. The simulated and measured results were compared, and the differences were quantified and described.
Our model has the potential to be a powerful part of future wireless systems and heterogeneous networks design, where rapid detailed ad hoc predictions and optimisations for reconfigurable multiple-antenna systems could be required.
Saunders S, Aragon A: Antennas and Propagation for Wireless Communication Systems. Wiley & Sons; 2007.
Zhengqing Y, Iskander MF, Zhijun Z: A fast ray tracing procedure using space division with uniform rectangular grid. IEEE Antennas and Propagation Society International Symposium, 2000 2000, 1: 430-433.
This study was supported in part by the Czech Ministry of Education, Youth and Sport Research (Project no. OC10005) "Intelligent Infrastructures for Cognitive Networks" in the frame of the COST IC0902 project and WUN Cognitive Communications Consortium.
Authors and Affiliations
Department of Electromagnetic Field, Czech Technical University in Prague, Technicka 2, 166 27, Prague 6, Czech Republic
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.