Accurate vehicle detection from LiDAR point cloud data is essential for the safe operation of autonomous driving systems. However, many existing detection approaches struggle with class imbalance and background bias, which often lead to poor detection of less frequent vehicle categories such as cycles and trucks. This study proposes a Hybrid Spatial-Network Module (HSNM) designed to enhance vehicle class detection in LiDAR-based semantic segmentation. The proposed module integrates Atrous Spatial Pyramid Pooling (ASPP) and Squeeze-and-Excitation (SE) mechanisms within the PointSeg semantic segmentation framework to improve multi-scale contextual feature extraction and channel-wise feature recalibration. The model was evaluated on a dataset of 1,865 LiDAR point clouds acquired with an Ouster OS1 sensor. Experimental results demonstrate consistent improvements across all vehicle categories: the F1-score for cycles increased from 0.6783 to 0.8010, for cars from 0.8068 to 0.8917, and for trucks from 0.7712 to 0.8922. In addition, the mean F1-score improved from 0.9165 to 0.9651 and the mean Intersection over Union (mIoU) increased from 0.6789 to 0.7246. Statistical evaluation using paired t-tests confirms that the improvements are significant (p < 0.05). Comparative analysis further shows that the proposed approach outperforms existing models such as PointNet++ and VoxelNet while maintaining a lightweight architecture suitable for real-time applications. Runtime analysis indicates a processing time of approximately 42 ms per frame on CPU and under 10 ms on GPU. These results demonstrate that the proposed hybrid architecture improves detection performance while maintaining computational efficiency for autonomous driving applications.
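
To make the channel-wise recalibration concrete, the following is a minimal NumPy sketch of the squeeze-excite-scale pattern used by an SE block, one of the two components the HSNM combines (the other being ASPP). All shapes, the reduction ratio, and the random weights are illustrative assumptions, not values taken from this study.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_recalibrate(features, w1, w2):
    """Recalibrate the channels of a (C, H, W) feature map.

    Squeeze: global average pool each channel to one scalar.
    Excite:  a two-layer bottleneck (ReLU then sigmoid) turns the
             pooled vector into a per-channel gate in (0, 1).
    Scale:   multiply each channel by its gate.
    """
    squeezed = features.mean(axis=(1, 2))        # (C,)   squeeze
    hidden = np.maximum(w1 @ squeezed, 0.0)      # (C/r,) ReLU bottleneck
    gates = sigmoid(w2 @ hidden)                 # (C,)   excite
    return features * gates[:, None, None]       # scale

# Toy example: an 8-channel map with reduction ratio r = 4 (assumed values).
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8)) * 0.1
w2 = rng.standard_normal((8, 2)) * 0.1
out = se_recalibrate(feats, w1, w2)
print(out.shape)  # (8, 4, 4): same shape, channels rescaled by learned gates
```

Because each gate lies strictly in (0, 1), the block can only attenuate channels relative to one another; in a trained network this lets informative channels (e.g. those responding to sparse truck or cycle returns) dominate the downstream features.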