Today, ground combat vehicles (GCVs) in Warfighter Information Network-Tactical (WIN-T) systems are highly interconnected and autonomous. However, protecting a large number of wireless communication links against enemy interception in a dynamic environment is challenging. Because of GCV mobility, the Low Probability of Intercept (LPI) capability is easily compromised, in particular when multiple interception techniques are used simultaneously. In this paper, we investigate the problem of preserving LPI capability using both traditional optimization and Deep Reinforcement Learning (DRL) approaches. Unlike prior work, we propose an anti-interception strategy against both energy-based and correlation-based interceptors. Our strategy jointly optimizes the power allocation (PA) and spreading factor assignment (SA) of the WIN-T to evade these interceptors. The problem is mathematically formulated as a non-convex optimization model, which we solve using advanced techniques such as decomposition and difference-of-convex (DC) programming. To obtain the optimized solution in near real-time, we design a Multi-Agent Deep Reinforcement Learning (MADRL) strategy. Our numerical results show that the performance of the proposed MADRL strategy is close to that of the optimal solution, making it applicable to practical systems.