‘War never leaves where it found a nation; it is never to be entered upon without mature deliberation.’ – Edmund Burke
Most modern military conflicts have largely been between apparent unequals. Yet victory for the superior power is not easily achieved. A decade ago the Hizbullah was able to grind to a halt the advance of the powerful Israeli forces in Lebanon. The American Army has failed to defeat the apparently ‘ragtag’ Taliban forces even after seventeen years of fighting. In current times the Houthi rebels in Yemen have managed to hold at bay a far more powerful Saudi-led, America-backed adversary. In each case the numerically smaller and poorly equipped adversaries, mostly non-state actors, have deterred the more powerful by resorting to unconventional means, thereby eroding the strength of those with greater numbers and firepower. How has this been possible?
In modern times, it has largely been the case that when two enemies confronted each other on the battlefield, the outcome could be predicted by a fairly straightforward equation: higher numbers or greater firepower had the better chance of winning. A mathematical model along these lines was developed during the First World War (1914-1918) by Frederick Lanchester, who gave it his name. His thesis was that if two forces, occupying the same land area and using the same weapons, fired randomly into the same target area, both would suffer the same rate of casualties, and consequently the smaller force would be eliminated first. This is known as Lanchester’s Linear Law.
However, if targets are aimed at, and multiple targets receive fire from many directions, then the rate of attrition depends on the number of weapons doing the firing. In such situations, Lanchester’s view was that the power of a force is proportional not to its number of units, but to the square of that number. The two principal factors in this equation are, therefore, first, the size of the combating group and, second, its rate of inflicting damage; other circumstances are ignored. This formula is known as Lanchester’s Square Law. Thus, mathematically, it would take an N-squared-fold increase in quality to make up for an N-fold increase in quantity.
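The Square Law can be illustrated with a short numerical sketch (the function, its names and the chosen numbers are ours, for illustration only, not part of Lanchester’s original presentation). Under aimed fire each side’s loss rate is proportional to the number of enemy units, so the quantity a·A² − b·B² is conserved and fighting power scales with the square of unit count:

```python
# Illustrative sketch of Lanchester's Square Law. Under aimed fire:
#     dA/dt = -b * B,    dB/dt = -a * A
# where a, b are per-unit effectiveness ("quality"). The quantity
# a*A^2 - b*B^2 is conserved, so an N-fold edge in numbers needs an
# N-squared-fold edge in quality to offset it.

def simulate(a0, b0, a_eff=1.0, b_eff=1.0, dt=0.001):
    """Euler-integrate the attrition equations until one side is eliminated."""
    A, B = float(a0), float(b0)
    while A > 0 and B > 0:
        A, B = A - dt * b_eff * B, B - dt * a_eff * A
    return max(A, 0.0), max(B, 0.0)

# Equal quality, 2:1 numbers: the larger force wins with most of its
# strength intact -- roughly sqrt(200^2 - 100^2) ~ 173 survivors.
survivors_a, survivors_b = simulate(200, 100)

# Quality compensates only quadratically: 100 units of 4x (= 2-squared)
# effectiveness only just beat 199 units of baseline effectiveness.
elite_a, mass_b = simulate(100, 199, a_eff=4.0, b_eff=1.0)
```

Note how lopsided the first result is: doubling numbers at equal quality does not merely win, it wins while losing under 15 per cent of the force, which is the essence of the Square Law.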
The principle has too many limitations to be useful in contemporary conflict. It applies only if one unit (one soldier, one tank, one ship or one aircraft) can take out one equivalent adversary at a time. It would not, therefore, apply to rapid-firing machine guns, artillery barrages, or nuclear ordnance (such as short-range tactical theatre weapons), where a single unit can inflict damage on multiple targets. Nor would it apply to whole armies engaged in total war, where the tactical deployment at any given time does not involve total numbers. Yet for decades the formulae provided a set of metrics with which commanders could develop their preferred tactics, particularly when asymmetries of power were involved.
In 1996, two RAND Corporation researchers in the United States, John Arquilla and David Ronfeldt, wrote a seminal work called The Advent of Netwar. The term ‘netwar’ described an emergent technique by which ‘rogue’ non-state actors might organise themselves to counter more powerful adversaries, using decentralised and more flexible ‘network’ structures. According to the two researchers, non-state actors may use three basic types of network: one, the ‘chain’ network, where end-to-end exchanges must travel back and forth through intermediary nodes; two, the ‘hub’ or ‘star’ network, where disparate actors are tied to a central, though not necessarily hierarchical, node through which all communications travel; and three, the ‘all-channel’ network, where every individual actor is able to communicate fully with all other nodes in the system. The writers point out that it is the third, ‘all-channel’ type that is acquiring significance.
A principal element of this is the absence of a central command or key node. The complete decentralisation allows for considerable local autonomy. This implies that, at times, the organisation in question appears to be ‘acephalous’ (headless), or at other times, ‘polycephalous’ (hydra-headed). However, despite the autonomy, a linkage is needed between the nodes in terms of shared doctrines, beliefs, ideologies, or interests. The common objective provides an “ideational, strategic, and operational centrality that allows for tactical decentralisation”.
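The resilience of the ‘all-channel’ form can be made concrete with a small sketch (our own construction, not from the RAND study): model each topology as an adjacency structure and test whether the network stays connected after losing a node.

```python
# Hypothetical models of the three 'netwar' topologies as adjacency sets,
# plus a check of what happens when one node is removed.

def chain(n):
    # each node talks only to its immediate neighbours
    return {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}

def hub(n):
    # node 0 is the central hub; all traffic passes through it
    g = {0: set(range(1, n))}
    g.update({i: {0} for i in range(1, n)})
    return g

def all_channel(n):
    # every node can reach every other node directly
    return {i: set(range(n)) - {i} for i in range(n)}

def connected_after_removing(g, victim):
    """Is the remaining network still a single connected component?"""
    nodes = set(g) - {victim}
    if not nodes:
        return True
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend((g[v] & nodes) - seen)
    return seen == nodes

# The hub network dies with its head; the all-channel network survives any loss.
hub_survives = connected_after_removing(hub(6), victim=0)
mesh_survives = all(connected_after_removing(all_channel(6), v) for v in range(6))
```

The contrast captures the ‘acephalous’ point: a hub network has a single point of failure, a chain breaks at any intermediary node, while an all-channel network remains fully connected no matter which node is eliminated.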
A more sophisticated variant, open only to conventional, well-equipped armies (and thus beyond the reach of non-state actors), is ‘network-centric operations’, a doctrine now pioneered by the United States Department of Defense. The idea is to increase competitiveness in war-fighting capability by robustly linking up geographically dispersed forces via information technology. This would require the establishment of a ‘Global Information Grid’. There are those who approach this very cautiously, arguing that the enemy may hack the system by introducing false or incorrect information.
Enter ‘Artificial Intelligence’ (AI)
The introduction of Artificial Intelligence-related weaponry (‘killer robots’, drones and the like) is likely to radically alter the character of the future battlefield. Early acquisition of this capability will give a protagonist a greater advantage over rivals. Powerful states like the US, China and Russia, as well as non-state actors, are in fierce competition for its possession. It is associated with the development of requisite algorithms and ‘deep learning’, involving layers of neural networks. AI military machines will be adept at data analytics, processing and interpreting vast quantities of information with remarkable rapidity. The weapons involved may at some point be able to operate without human supervision, and could even assume command and control roles, not only of the battlefield but of the larger war itself. It is possible that they will think much faster than human brains, outstripping human cognitive capacity. They would be able to reflect many moves ahead in conflict situations, and autonomously take actions which even commanders may not be able to comprehend! An interesting book in this regard is Army of None by Paul Scharre.
These developments carry several implications. First, ethical and moral responsibility for action in battle (war crimes?) cannot be attributed to machines, rendering the determination of culpability very difficult. Second, since these weapons will be more precise, with possibly less collateral damage, the propensity to use them will be greater. Third, since they are not weapons of mass destruction, the rules of ‘deterrence’ that apply to nuclear weapons might not apply; indeed, the perceived advantage of early or first use could increase the likelihood of pre-emptive strikes. Finally, unlike nuclear or chemical weapons, they are subject to no restraining treaties or agreements (and such agreements would be far more difficult to achieve, as there is as yet no method of applying restrictive controls to computer software).
Force-configuration Changes in Battlefield
Currently, in some advanced armies such as the US Army, enlisted non-commissioned officers are being trained to take over tactical command from senior officers. Squads of 14 or fewer personnel are replacing platoons of 40 or so, the standard during the Vietnam War. Larger units commanded by higher-ranking officers do exist, such as brigades of some 3,000 soldiers under a Colonel, or 500 Marines under a Lieutenant Colonel, but on the battlefield smaller squads led by highly skilled non-commissioned officers (the ‘strategic Corporal’?) are seen as the better deployable option, possessing greater mobility and a better capacity to evade the enemy’s pin-point artillery fire. Also, should a squad be neutralised, the impact on the larger formation is minimal. The Hizbullah in Lebanon used highly mobile small units with a very simple command structure.
A corollary of this development is that general officers have moved into higher strategic roles (the traditionalists are said to resist this). These roles may be two-fold: one, supervising the coordination of units and ensuring supplies (earlier, as in the British Army, the Supply Corps’ role was considered far less glamorous!); and two, tendering advice to civilian policy-makers, based not just on knowledge of military matters but also on strategic theory, geo-politics, and international relations. Still, all agree that the time to cut the General off from the tactical battlefield has not yet come, and indeed might never come; the right answer is a balanced approach.
Battlefield tactics and strategies of war remained similar over long periods of history, from Graeco-Roman times through the Napoleonic Age and even the two World Wars. Over the last few decades they have changed enormously. Huge advances in technology have led to momentous changes in both strategy and tactics. Size in numbers and advantages of firepower as factors of power and strength are being eroded by new strides in human (and mechanical) ingenuity. Take the recent devastating strike on Saudi oil installations by the far weaker Houthi rebels of Yemen: the drones they used apparently cost no more than US$15,000 apiece, yet they were able to penetrate Saudi Arabia’s hugely expensive defence systems. Smart procurement by the ostensibly weaker side in a military conflict might thus change equations in such a way that the slaying of Goliath by David is not only a Biblical parable but a contemporary reality. If no one can be certain who will win a war, it is perhaps best avoided.
Dr Iftekhar Ahmed Chowdhury is Principal Research Fellow at ISAS, National University of Singapore, a former Foreign Advisor, and President of Cosmos Foundation Bangladesh.