Abstract

This study presents a novel method for measuring ground wind speed (WS) using audio data collected from surveillance cameras. The continuous wavelet transform is employed to model wind sounds and capture their dynamic variations over time. A deep‐learning model integrating attention‐enhanced Convolutional Neural Network and Bidirectional Gated Recurrent Unit architectures is developed to extract WS features from the time‐frequency representation of the surveillance audio. For model training, a surveillance audio‐based WS data set is constructed. Extensive experiments demonstrate that the proposed model achieves a WS level prediction accuracy of 84.56% on the self‐constructed data set and 82.25% in real‐world tests. Additionally, the model yields root mean square error values of 1.84 m/s and 1.49 m/s for two typhoon events. Although challenges remain in improving low‐speed wind measurement accuracy, this approach highlights the potential of a high‐resolution, low‐cost, urban wind observation network built on surveillance cameras, significantly enhancing the granularity of urban ground wind observations.
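The time-frequency front end described above can be illustrated with a minimal continuous wavelet transform sketch using a Morlet mother wavelet. This is a generic illustration under stated assumptions, not the paper's implementation: the sampling rate, scale grid, center frequency `w0`, and the synthetic "wind audio" signal are all arbitrary choices for demonstration.

```python
import numpy as np

def morlet_cwt(signal, scales, w0=6.0):
    """Minimal continuous wavelet transform with a Morlet mother wavelet.

    Returns a (len(scales), len(signal)) complex matrix; its magnitude is
    the time-frequency scalogram that a CNN front end could consume.
    """
    n = len(signal)
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # Sampled, scaled Morlet wavelet (normalization constant simplified).
        t = np.arange(-int(4 * s), int(4 * s) + 1)
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        wavelet /= np.sqrt(s)
        # Correlate the signal with the wavelet at this scale.
        out[i] = np.convolve(signal, np.conj(wavelet)[::-1], mode="same")
    return out

# Toy 1 s "wind audio" frame at 2 kHz: a low-frequency tone plus noise.
fs = 2000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.default_rng(0).standard_normal(fs)

scalogram = np.abs(morlet_cwt(audio, scales=np.geomspace(4, 64, 32)))
print(scalogram.shape)  # one row per scale, one column per sample
```

Stacking such scalograms over consecutive audio frames yields the 2-D inputs on which a convolutional-recurrent model can learn wind-speed features.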
