By Charles Emmerson | Posted 4th March 2014
Protesters at Independence Square on the first day of the Orange Revolution, 2004
Not so long ago, looking for a short history of Ukraine in a central London bookstore, I was offered the following memorable advice: “Look under Russia”.
I did. And between shelves groaning with the glories of Russian history, from the love affairs of Catherine the Great to the crimes of Joseph Stalin, I found two thin volumes on Ukraine, a country of some forty-six million people. One was decorated with an impressionistic painting of the 2004 Orange Revolution. I bought both. I doubt very much they were immediately replaced.
‘Looking under Russia’ is perhaps an appropriate metaphor for Ukrainian history.
Since the Pereiaslav / Pereyaslav treaty of 1654, Ukraine has only enjoyed statehood independent from Russia at moments of extreme geopolitical dislocation, such as in the final days of the First World War, in the wake of the Russian Revolution of 1917. Russian nationalists today appear to view Ukrainian independence as a similar aberration, the consequence of what President Vladimir Putin labelled the greatest geopolitical disaster of the twentieth century: the collapse of the Soviet Union – a.k.a. the Russian Empire – in 1991.
Old habits die hard. For many Russians, Ukraine is like a phantom limb still felt to be there long after its amputation. The idea that Ukraine is really a nation at all strikes some Russians as odd. To the extent that perceptions of history condition politics, understanding the Russian view of Ukrainian history – and the Ukrainian view of Ukrainian history – is essential.
Though wrong, the idea that Ukrainian history is really just an annex of the sumptuous many-roomed mansion of Russian history is common. To some degree it is understandable. Ukraine and Russia have shared triumph and tragedy from the birth of the Kyivan / Kievan Rus (the first proto-Russian state – though this of course raises the question of whether the Rus was Russian or Ukrainian at all) through the wars against the Poles in the seventeenth century to the bloody struggle against fascism in the twentieth.
The historical links between the two countries, ancient and modern, are manifold and profound. The Orthodox churches of Ukraine and Russia share a patron saint – St. Vladimir or St. Volodymyr – whose statue (spelt the Ukrainian way) stands proudly on a street corner in west London. On the edge of Kyiv, the Ukrainian capital, a huge concrete museum complex inaugurated in the early 1980s commemorates the Great Patriotic War (1941-1945). Outside, a silvery figure of a woman, two hundred feet tall, holds a sword aloft in one hand, and a shield with the emblem of the Soviet Union in the other. This is a memorial to shared sacrifice – eight million Ukrainians died in the war – and a shared victory. Almost seventy years after the end of the war, and nearly a quarter of a century after the collapse of the Soviet Union, such narratives are still powerful.
For a long time, Russians saw Ukrainians as being little more than country bumpkin relatives. Theories of Slavic ethnogenesis described the two peoples as siblings born of the same Slavic womb: the “Great Russians” (i.e. Russians) on one hand and the “Little Russians” (i.e. Ukrainians) on the other. Ukrainian literature, which began to emerge in the nineteenth century, was patronisingly viewed as the picturesque product of a peasant society, essentially subordinate to Russia’s own literary canon, even when it produced such great poets as Taras Shevchenko. The fact that the flowering of Ukrainian national culture was strongest in western Ukraine, then part of the Austro-Hungarian Empire, made some Russians dismiss the whole thing as an anti-Russian ruse sponsored by external forces – a refrain familiar to anyone listening today.
In the Soviet period the idea of Ukrainian nationhood was viewed with similar suspicion, now additionally freighted with suggestions it was intrinsically counter-revolutionary. In April 1918, as Russia imploded in revolution, a conservative German-backed regime was set up in Kyiv. Its leader, Pavlo Skoropadsky, revived the ancient Cossack military title of Hetman, last held by a man who had died aged 112 in 1803, in a remote Russian monastery which the Soviets would subsequently turn into a gulag. Later, in the Great Patriotic War, some Ukrainians signed up with the Germans to fight the Soviets – some even joined the SS. Nationalist anti-Soviet actions continued into the 1950s – providing the basis in historical memory for the contemporary lumping together of even moderate Ukrainian nationalists with right-wing extremists as “fascists” and “bandits”.
In the Soviet era Ukrainian national identity was never completely subsumed into Russian or Soviet identity. Sometimes, indeed, it could be useful to the Soviet state. In 1939, when Galicia and Volhynia (followed by Bukovyna in 1940) were annexed to Soviet Ukraine as a result of the Molotov-Ribbentrop pact and Stalin’s co-invasion of Poland, the Ukrainian Supreme Soviet sent this message to Stalin: “Having been divided, having been separated for centuries by artificial borders, the great Ukrainian people are reunited forever in a single Ukrainian republic”. In 1945, professions that Ukraine was not a Soviet vassal but in fact an independent Communist state allowed Ukraine to join the United Nations as a founder member alongside the USSR, thus giving Moscow an extra vote in UN proceedings.
The process through which the borders of modern Ukraine were defined, both in the west and on the Black Sea, was part and parcel of Russia’s own headlong expansion through three centuries of Eurasian history. In the 1700s and 1800s, as the Russian geopolitical imagination became obsessed with the idea of turning the Black Sea into a Russian lake – perhaps even going so far as to seize control of Constantinople/Istanbul – the Ottoman Empire was bloodily and repeatedly pushed back from its redoubts on the northern side of the Black Sea. The Ukrainian provinces were the territorial beneficiaries. The country became ever more tightly integrated into the economics and politics of the growing Russian empire, serving as its breadbasket, and as its route to the sea.
At the end of the eighteenth century, German-born Catherine the Great founded the port of Odessa – and its hinterland of New Russia – with the help of a Spanish-Irish Neapolitan and, later, a French aristocrat. The city filled with Greeks, Bulgarians and Jews. Pushkin was sent there as punishment, and promptly started an affair with the wife of the city’s Russian governor. Amongst countless others, Odessa would ultimately produce Trotsky and Akhmatova, two titans of Russian politics and culture, before becoming the site of some of the cruellest massacres of the Holocaust.
Further east, through war, colonisation and the ethnic cleansing of its Muslim population, Crimea, the last remnant of the Mongol Golden Horde, was turned into the finest jewel in the Russian Empire. As proverbial pleasure garden for late imperial flings (as recounted by Anton Chekhov), then fantasy holiday camp for Soviet factory managers and key to Russia’s southern flank (as base of the Black Sea fleet), Crimea became firmly embedded in Russians’ psychological geography as their own private playground. Less than a century after the Tsars had conquered it, Stalin chose Crimea as the place to redraw the map of Europe once more in 1945.
Nine years later, when former Ukrainian party boss Khrushchev transferred Crimea to the Ukrainian SSR in celebration of the three hundredth anniversary of the Pereiaslav / Pereyaslav treaty, there was no thought that the internal borders of the Soviet Union would ever become international borders. It was only in 1991, as a result of an attempted coup (which took place, ironically enough, while Mikhail Gorbachev was on holiday in Crimea) that the peninsula spun out of the ultimate control of Moscow, with the Soviet superstructure itself being legislated out of existence.
The idea that Crimea became part of an independent Ukraine essentially by accident is gospel truth amongst Russian politicians. It is but a short step to view Ukrainian possession of Crimea as historically illegitimate. And therein lies the beginnings of a dangerous game. What happens next? Perhaps Ukrainian independence itself, or that of the Baltic states, is equally seen as the consequence of a set of historical circumstances which some might now like to reverse.
Where does a concern for history shade into revanchism? And how far does one’s historical perspective extend back into the past? Visions of the Crimea as eternally Russian wilfully forget the Muslim population which Russian and then Soviet power displaced and deported – sometimes violently, always tragically, and with little historical recognition. As late as the turn of the last century, before the cataclysms of the twentieth, the Crimean Tatars represented nearly half the people of Crimea. Khrushchev recognised the deportation of the Tatars as one of Stalin’s crimes in his famous 1956 speech to the Twentieth Party Congress. It was not until the 1990s that many were able to come back.
Russia’s version of Ukrainian history, wrapped up in its own narrative of imperial rise and fall, from the Romanovs to the Soviets, helps explain Moscow’s attitude towards its southern neighbour – not in terms of objective interests, though these are real enough, but in terms of emotion, in terms of who is right and who is wrong. What makes things truly bad, from the Russian perspective, is that Ukrainians by and large no longer share the Russian interpretation of their history. The past looks different these days from Kyiv (still more, from Lviv). Instead of Ukrainians cherishing their supporting role in Russia’s geopolitical greatness – which essentially means the power and prestige of the state – Ukrainians have come to cherish alternative narratives of their history, based around freedom and resistance. Rediscovering their past has been a critical part of asserting Ukrainian independence. Accepting the possibility of multiple histories, not just one, is a hallmark of democracy – and it has now become vital.
Episodes once viewed as the historical glue of the Russo-Ukrainian relationship have become contested. While Russians tend to see the Pereiaslav / Pereyaslav treaty of 1654 as a moment of reunification for the Russian and Ukrainian peoples, many Ukrainians see the same treaty as a temporary alliance between military leaders which the Russians subsequently interpreted to their advantage. In 2009, on the three hundredth anniversary of the Battle of Poltava – perhaps the most important battle in eighteenth-century Russian history – then-President of Ukraine Viktor Yushchenko was blasted by Russia for suggesting that the Ukrainians who fought with the Swedes against the victorious forces of Russian Tsar Peter the Great were true patriots.
Similarly, while the famines of the early twentieth century used to be viewed as a common experience of Soviet suffering, even as part of the forging of the Soviet industrial miracle, some now argue that the famines were, in effect, a Moscow-led assault on Ukrainians in particular. Some go so far as to suggest genocidal intent. The incorporation of western Ukraine into the Soviet Union in 1939 can still be seen in its traditional light: as the re-unification of Ukraine under Soviet leadership. But for the old-age pensioners of Lviv – and increasingly for their grandchildren – it may be remembered as the beginning of a fifty-year Russian occupation. And while Ukrainian nationalists in the Great Patriotic War used to be roundly condemned as nothing more than opportunistic, anti-Semitic and fascist lowlifes – which some of them no doubt were – more savoury elements may now be rehabilitated, as in the modern Baltic states, as patriots caught in a vice between the equivalent totalitarianisms of Nazism and Communism. Some Ukrainians make what is, for many Russians, a sacrilegious parallel: Putin as Hitler.
For both Russians and Ukrainians, the interpretation of Ukrainian history is personal. As in all borderlands, the contradictions and complexities of the tangled past are reproduced over and over in the stories of families and in the identities of individuals. For the governments in Moscow and in Kyiv, history is political too. Narratives of the past can be spun to justify, oppose or defend different courses of action in the present. History can be a tool of influence – a tool of long-term psychological warfare even – used to manipulate the here-and-now, to give added emotional resonance to geopolitical imperatives or to claims of political legitimacy.
Bluntly put, history can be a kind of territory. In Ukraine, it is not just the country’s land which is being tussled over. It is the country’s past as well. If Russia and Ukraine are to live as respectful neighbours side by side, they will have to find a way to live with each other’s history too.
Charles Emmerson is the author of 1913: The World before the Great War.
Alawites and the Fate of Syria
by Ayse Baltacioglu-Brammer
This image depicts Ali, a Jesus-like figure in Alawite theology. Religiously and communally separate from the Muslim populations of Syria, Alawites have ruled for decades through the Assad family. Their unique history sheds light on the civil war raging in Syria today.
Editor's Note:
The civil war in Syria has become one of the bloodiest and most geopolitically important events to come out of the Arab Spring. While the war has in many ways become a sectarian Shi’a-Sunni battle, a third religious group has played a pivotal role in the history of the country: the Alawites.
When in early March 2011 the “Arab Spring”—a wave of pro-democracy demonstrations that began in Tunisia in late 2010 and swept across Libya and Egypt—finally reached Syria, people from various religious and ethnic backgrounds (Muslims, non-Muslims, and Alawites; Arabs, Kurds, and Turkmen) rallied together to oppose the regime of Bashar al-Assad, the “elected” president of Syria.
The unrest resulted from a combination of socio-economic and political problems that had been building for years and that especially affected Syria’s large rural population. The drought of 2007-2010, high unemployment rates, inflation, income inequality, and declining oil resources all contributed to profound discontent on the part of the opposition movement. Moreover, harsh and arbitrary political repression had also eroded Bashar al-Assad’s long-cultivated facade as a “reformer.”
In the early days of the rebellion, the frequent protest chant, "Syrians are one!" indicated the determination of the demonstrators to show the unity of the opposition movement, which, according to them, was above any sectarian and ethnic division and dispute. In an unusual show of solidarity, in Latakia, the fifth largest city in Syria and one with a major Alawite population, a Sunni imam led prayers for Alawites, while an Alawite sheikh led prayers at a Sunni mosque.
However, two years after the conflict began in the midst of tremendous hope and optimism, it has degenerated into a civil war with more than 100,000 deaths and 2 million refugees. And it has put Syria at the centre of nasty geopolitical struggles involving the United States, Russia, Iran, Lebanon, and Turkey.
The war today has in many ways become a war fought between the majority Sunnis on one side and the minority Alawites, backed by Shiites, on the other.
Alawites are adherents of a syncretistic belief with close affinity to Shiite Islam and, importantly, the Assad family is Alawite. But despite their crucial role in the unfolding struggles in Syria, they are little known outside the region.
In most discussions of the Syrian civil war, the most neglected question is: How and why did an opposition movement that initially united various religious and ethnic segments of Syrian society against a dictatorial regime turn into another sectarian war between Sunnis and Shiites?
Answering this question requires us to appreciate the peculiar position occupied by Syrian Alawites and the role played by them in the creation of the modern state of Syria.
We also need to understand how sectarian differences have long been used as a political tool by the Assad family—who have ruled Syria since Bashar al-Assad's father Hafez al-Assad took power in 1970—and, before them, by the French who controlled Syria for much of the 20th century.
Who are the Alawites?
Today Alawites comprise 12-15% of Syria’s population, or about 2 million people. They mainly live in the mountainous areas of Latakia on the north-western coast, where they constitute almost two-thirds of the population.
The Alawites are composed of several main tribes with numerous sub-tribes. Syria’s Alawites are also divided between two distinct groups: the more conservative members of the community, who mainly live in rural regions as peasant farmers and value the traditional aspects and rituals of the belief, and the middle-class, educated, urban Alawites who have been assimilated into Twelver Shi'ism, aided by Iranian and Lebanese propaganda. [Twelver Shi'ism is the principal and largest branch of Shiite Islam.]
Syria’s Alawites are a part of the broader Alawite population who live between northern Lebanon and southern-central Turkey. While not doctrinally Shiite, Alawites hold Ali (d. 661), who is considered the first Imam by the Shiites (and the fourth caliph by the Sunnis), in special reverence.
The sect is believed to have been founded by Ibn Nusayr (d. ca. 868), who was allegedly a disciple of the tenth and eleventh Shiite Imams and declared himself the bab (gateway to truth), a key figure in Shiite theology. Alawites were called “Nusayris” until the French, when they seized control of Syria in 1920, imposed the name “Alawite,” meaning the followers of Ali, in order to accentuate the sect's similarities to Shiite Islam.
The origins of the Alawite sect, however, still remain obscure.
While some scholars claim that it began as a Shiite faction, others argue that early Alawites were pagans who adopted themes and motifs first from Christianity and then from Islam. In essence, Alawism is an antinomian religion with limited religious obligations. Despite similarities to the Shiite branch of Islam, some argue that Ibn Nusayr's doctrines made Alawism almost a separate religion.
The Alawites believe in the absolute unity and transcendence of God, who is indefinable and unknowable. God, however, reveals himself periodically to humankind in a Trinitarian form. This, according to the Alawite theology, has happened seven times in history, the last and final being in Ali, Muhammad, and Salman al-Farisi, who was a Persian disciple and close companion of Muhammad.
The Alawites hold Ali to be the (Jesus-like) incarnation of divinity. While mainstream Muslims (both Sunni and Shiite) proclaim their faith with the phrase “There is no deity but God and Muhammad is His prophet,” Alawites assert, “There is no deity but Ali, no veil but Muhammad, and no bab but Salman.”
Alawites, furthermore, ignore Islamic sanitary practices, dietary restrictions, and religious rituals. The syncretistic nature of the Alawite belief is further evident in its calendar, which is replete with festivals of Christian, Persian, and Muslim origin.
Giving Ali primacy over Muhammad, a feature shared by various ghulāt (Shiite extremist) sects, permitting wine drinking, not requiring women to be veiled, holding ceremonies at night, and retaining several pagan practices have led mainstream Muslims to single out Alawites as heretics and extremists.
In Syria, the Alawites, though ethnically and linguistically Arab, developed certain characteristics that isolated them from the Sunni Syrian population.
Alawites before the 20th Century
Uncertainty about Alawites’ religious identity confused observers and produced suspicion among political and religious authorities that often resulted in persecution over the centuries.
The first proponents of the Alawite faith fled to Syria from Iraq in the 10th century. In the 11th century they were forced out of the Levantine cities and into the inhospitable coastal mountains of north-western Syria, which has remained the heartland of the Alawites ever since.
In the 14th century, Alawite marginalization was perpetuated by the first anti-Alawite fatwa (legal decision) by a Sunni scholar, Ibn Taymiyyah (d. 1328), which essentially proclaimed Alawite belief as heresy. Thereafter Alawites suffered major repression by the Mamluks (r. 1250-1517) who ruled the region. Geographically isolated, Alawites maintained their religious integrity in the face of continuous attacks and invasions.
In the Ottoman era (1517-1918), ill treatment continued as Alawites were considered neither Muslim nor dhimmi (a religious group with certain autonomy with regard to communal practices such as Christians and Jews) by the Ottoman government in Istanbul. On the other hand, during much of Ottoman rule, Alawites could practice their religion and a few enjoyed official positions.
The main reason for tension between central Ottoman authority and the Alawite community stemmed from Ottoman efforts to impose its authority by collecting revenue from their local regions. Alawites, who acquired a reputation as “fierce and unruly mountain people,” frequently resisted paying taxes and plundered the Sunni villages.
Attempts by later Ottoman governments to enrol Alawites in the army served as another reason for the Alawite uprisings and perpetuated the strong resentment towards Sunnis, who had so often been seen as their oppressors.
At the end of the 19th century, Alawites rose up against the Ottoman government demanding more autonomy. The rule of Ottoman sultan Abdulhamid II (r. 1876-1909) did little to diminish these desires, even though he allowed some Alawites to make careers in the Ottoman army and bureaucracy.
The Alawites enjoyed little benefit from the centralized Ottoman government and its largely Sunni-based policies, which attempted to convert locals to Sunni Islam through the building of mosques in Alawite villages and the Sunni training of Alawite children.
The “Turkification” policies pursued by the Young Turks—a group of secularist and nationalist activists who organized a revolution against the Ottoman monarchy in 1908 and ruled the empire until 1918—accelerated the cooperation of the Alawites with new actors in the region: the French.
Alawites during the French Mandate and the Alawite State of 1922
The Alawite region became a part of Syria as a by-product of the notoriously secret 1916 Sykes-Picot Agreement between France and Britain. It was placed under the French mandate after the end of World War I.
After defeating and evicting the British-backed Syrian King Faysal in 1920, France, in a divide-and-rule strategy, partitioned Syrian territories into four parts, one of which was Latakia, where most of the population was Alawite.
By promoting separate identities and creating autonomous zones in Syria along the lines of ethnic and sectarian differences, the French mandate aimed to maximize French control and influence in Syria. Muslim and Christian minorities were the main allies of the French against the Arab nationalism rooted among the urban Sunni elite.
Furthermore, Alawite territory was geographically crucial because French forces could use it to control the whole Levant coast.
During the mandate era, many local leaders supported the creation of a separate Alawite nation. Alawite cooperation with French authorities culminated on July 1, 1922 when Alawite territory became an independent state. The new state had low taxation and a sizeable French subsidy.
This independence did not last long: Latakia lost its autonomous status in December 1936, though the province continued to benefit from a “special administrative and financial regime.”
In return, Alawites helped maintain French rule in the region. For instance, they provided a disproportionate number of soldiers to the French mandate government, forming about half of the troupes spéciales du Levant.
Alawite peasants, who were not only religiously repressed and socially isolated by mainstream Sunni Muslims but also economically exploited by their fellow Alawite landowners, rushed to enlist their sons for the mandate army. As a result, a large number of Alawites from mountain and rural areas became officers and they formed the backbone of the political apparatus that would emerge in the 1960s.
French policy ultimately served its purpose to increase the sense of separateness between the political centre and the autonomous states in Syria's outlying areas.
In Paris in 1936, when France entered into negotiations with Syrian nationalists about Syrian independence, some Alawites sent memoranda written by community leaders emphasizing “the profoundness of the abyss” between Alawites and Sunni Syrians. Alawite leaders, such as Sulayman Ali al-Assad, the grandfather of Hafez al-Assad, rejected any type of attachment to an independent Syria and wished to stay autonomous under French protection.
Yet the Alawite community remained divided over the future of the community. Despite a deep sense of religious difference, an increasing number of Alawites and Sunni Arabs were coming to believe that the inclusion of Alawites in a unified Syria was inevitable. People both within and without Syria worked toward a rapprochement between the predominant Muslims and minority Alawites.
For instance, Muhammad Amin al-Husseini, the Grand Mufti of Jerusalem, who was the Sunni Muslim cleric in charge of Jerusalem's Islamic holy places from 1921 to 1938 and known as a leading Arab nationalist, issued a fatwa recognizing Syrian Alawites as Muslims. With this fatwa, al-Husseini aimed to unite the Syrian people against the Western occupation.
Following in his footsteps, several Alawite sheikhs made further statements emphasizing their adherence to the Muslim community (albeit Shiite) and to Arab nationalism. A group of Alawite students was also sent to the Najaf province of Iraq to be trained in the Shiite doctrines of Islam.
For all the benefits of French rule for the Alawites, the French Mandate ultimately did little to improve the economic conditions of the Alawite population as a whole. Newly emerged ideological parties such as the Syrian Social Nationalist Party (SSNP) used this fact to turn Alawites against the French and toward Arab nationalism.
Alawites after World War II
It was during the Second World War that the future of the Syrian state and its constituent parts was shaped. When war broke out in 1939, a new generation of Alawites proved more flexible in cooperating with Syrian nationalists, most of whom were Sunni urban elites.
With the formation of Vichy France in mid-1940, ultimate power and authority in Syria rested with the British, who favoured the creation of a unified independent Syria under the leadership of urban Sunni elite. Even though the Alawite territories belonged to independent Syria, historical mistrust between the Alawites and Sunnis made the transformation a lengthy and painful process.
After the war, Syria obtained independence in 1946, but entered into a period of political instability, unrest, and experimentation with pan-Arab connections to Egypt.
Once they recognized that their future lay within independent Syria, Alawites started to play an active role in two key institutions: the armed forces and political parties.
The Ba'ath party, founded in 1947 by several Muslim and Christian Arab politicians and intellectuals to integrate the ideologies of Arab nationalism, socialism, secularism, and anti-imperialism, was more attractive to Alawites than the Muslim Brotherhood, a Sunni conservative religious organization headquartered in Egypt with a large urban Sunni base in Syria.
Alawites and other minorities continued to be over-represented in the army due to two main factors. First, middle-class Sunni families tended to despise the army as a profession, which, according to them, was the place for “the lazy, the rebellious, and the academically backward.” Alawites, on the other hand, saw the military as the main opportunity for a better life.
Second, many Alawites, who had been coping with dire economic circumstances, could not afford to pay the fee to exempt their children from military service.
The Alawite presence in the military culminated in a set of coups in the 1960s. The final coup was carried out by General Hafez al-Assad, himself an Alawite, and brought the Alawite minority to power in Syria in November 1970. In February 1971, Hafez al-Assad became the first Alawite President of Syria.
Alawites and the Assad Dynasty
Born to a relatively well-off Alawite family in a remote village in north-western Syria, Hafez al-Assad joined the Ba'ath Party in 1946 and rose to become the de facto commander of the Syrian army by 1969. He relied on the Alawite community to consolidate his power and to establish his dynasty: sectarian solidarity has been a crucial component of Assad family rule from the beginning.
In the early stages of his rule, Hafez al-Assad emphasized Syria's pan-Arab orientation that required him to embrace the majority Sunni population. In 1971, he reinstated the old presidential Islamic oath, lifted restrictions on Muslim institutions, and encouraged the construction of new mosques.
At the same time, however, Assad not only placed trusted Alawites in key positions of the regime's security apparatus, but he also improved their living conditions, long among the most degraded in the Arab world. While rural Alawites benefitted from infrastructure improvements such as electricity, water, new roads, and agricultural subsidies, a group of urban Alawites enjoyed employment opportunities in the army and the state bureaucracy.
Overall, Alawites felt a sense of pride that "one of their own" had raised himself to such a high position.
In the end, Assad was unable to win the allegiance of large sections of Sunni Muslim urban society, particularly the conservatives with connections to the Muslim Brotherhood. His failure to fully bridge the divide was not only related to the heterodox character of his faith and certain anti-Islamic policies he adopted, but also to policies that favoured his co-sectarians over the rest of the Syrian population.
The clashes between the Syrian Muslim Brotherhood and the president, who symbolized the Alawite minority, culminated in rebellions against the regime in the late 1970s and early 1980s.
Simultaneously, the language used by the Muslim Brotherhood and its supporters, to whom Alawites were kuffars (disbelievers), only served to magnify Alawite insecurity, lead Alawites to back the Assad regime, and exacerbate ongoing tension.
The peak of this struggle was the battle in Hama in early February 1982, where Alawite (but also some Kurdish) troops killed around 30,000 Sunni civilians, effectively tying the fate of the Alawites to the Assad regime.
From that moment, politics in Syria have been dominated by sectarian divisions.
Sectarian insecurity among the Alawites—who believed that the fall of the regime could lead to revenge against their community following the events in Hama—led to firm support for hereditary succession in Syrian government. An Alawite attendant at Hafez al-Assad's funeral in 2000, therefore, did not hesitate to utter, “for us the most important [thing] is that the president should come from the Assad family.”
The Rule of Bashar al-Assad
Even though Bashar al-Assad’s inaugural slogan, “change through continuity,” was reassuring for Alawites, the same slogan was interpreted by the Sunni Syrian majority as an invitation to push for political change.
Bashar al-Assad's initial policies conveyed a message of economic and political reform, but his main strategy was redistributing the spoils of power among the loyal supporters of the regime and his family. These actions, rightfully called the "corporatization of corruption" by former Syrian vice president Abd al-Halim Khaddam, worked against not only the Sunni majority, but also many Alawites, who were left out of the small inner core that includes Bashar al-Assad, his brother, sister, brother-in-law, and cousins.
While the regime and its clients enjoyed unchecked power and wealth at the expense of the majority of Syrians, several instances of sectarian violence between Alawites and Sunnis erupted in Syria. The most recent of these outbreaks occurred in the summer of 2008. Bashar al-Assad used this violence as evidence to argue to Alawites that his authoritarian regime was the only protection for them from what he called Sunni fundamentalism and intolerance.
Moderate Alawites challenged Assad’s fear-based justifications for his rule, and many liberal Alawites later joined the early protests against the Assad regime. They were much more concerned with Assad’s political oppression, corruption, nepotism, and economic troubles than with sectarian bonds.
The prospect of shattering the historical alliance between the Assad regime and Syria’s Alawites was a tantalizing opportunity for Sunni oppositional leaders.
With this goal in mind, former Syrian Muslim Brotherhood leader Ali Bayanouni reached out to the Alawites in 2006, stating: “The Alawites in Syria are part of the Syrian people and comprise many national factions … [The] present regime has tried to hide behind this community and mobilize it against Syrian society. But I believe that many Alawite elements oppose the regime, and there are Alawites who are being repressed. Therefore, I believe that all national forces and all components of the Syrian society, including the sons of the Alawite community, must participate in any future change operation in Syria.”
This statement differed dramatically from the antagonistic tone of previous Muslim Brotherhood statements about the heretical nature of the Alawite sect.
Bashar al-Assad, as a keen politician and skilled strategist, would not allow any type of rapprochement between his co-sectarians and the Sunni majority, which has been against the Assad regime for decades.
Beginning early in his reign, Bashar al-Assad not only began actively to emphasize his Alawite roots but also manipulated to his benefit an increasing trend among the Alawites of Syria: conversion to mainstream Shiite Islam. He followed policies of forging ties with both Alawites and Shiites in Syria in a conscious effort to transform the nature of the opposition, from a united front against his anti-democratic rule into sectarian conflict between Sunnis and Shiites.
From Arab Spring to Civil War
The events of the Arab Spring destabilized Bashar al-Assad’s complicated efforts to balance and contain the forces opposed to his regime and emboldened these diverse challengers to stand together against him.
After protests began in Syria in March 2011, he quickly came to realize that the opposition movement was too powerful to control by turning yet again to the entrenched dependency between the Assad family and the Alawite minority.
As the regime used ever-increasing violence as its only recourse to suppress the opposition, Bashar al-Assad began to develop a new state policy to attract foreign support (especially from Iran and Hezbollah in Lebanon) by presenting his regime not as just another authoritarian government whose popularity was in decline, but as a Shiite state entrenched in the region against neighbouring Sunni states such as Saudi Arabia, Jordan, and Turkey.
Al-Assad began to position himself as a pious Shiite through public events, appearances, and organizations. And the main Shiite political and military actors in the region, Hezbollah and Iran, decided to back the Assad regime in very concrete ways. They sent much-needed financial and military support and ideologically bolstered Bashar al-Assad's fight against the Sunni “terrorists.”
The past eighteen months have proved that Bashar al-Assad’s strategy is serving its purpose as the nature of the conflict has transitioned into sectarian violence between Iran- and Hezbollah-backed Shiites and Sunnis, some of whom are backed by al-Qaida.
As so often in its history, religious affiliation matters more than any other allegiance in the Syrian political arena and, after an initial burst of opposition to the Assad government, Syria’s Alawites have remained generally supportive (if wary) of the regime.
Despite a general feeling emerging in many Alawite villages that the Assad regime no longer represents them—particularly after affiliating itself with orthodox Shiite actors of the region, who have been known for their hostility against heterodox branches of Shiite Islam, including the Alawites—there is still a great deal of political power to gain for Bashar al-Assad from exploiting the deep-seated Alawite insecurity against the Sunni majority.
The Assad regime has already proved its willingness to drag Alawites, the Syrian state, and even the region down with it into violent sectarian chaos if it continues to be challenged.
Nevertheless, there remains an opportunity—perhaps now only a hope—for Alawites and Sunnis to break free from this political deadlock and revive the supra-sectarian opposition that triggered the movement two years ago.
Source URL: http://origins.osu.edu/article/alawites-and-fate-syria
“Get Linked or Get Lost”: A History of the Internet
Posted by Nick Efstathiadis in #General History, 1900 - 2000, 2000-
For most people, the Internet arrived sometime between 1993 and 1995, but it did not just miraculously come into being. It is a long story of the almost accidental evolution of a government-funded military experiment named “ARPAnet” into today’s ubiquitous, commercial Web. Its lineage can be traced from ancient information-processing devices such as the abacus, through a series of rapid technological changes in the late nineteenth century--such as the typewriter, punch clock, cash register, and four-function calculator--that provided new ways of processing information. In the 1960s, a group of “anarchist hippies” from different organizations began realizing the potential computer technology had for revolutionizing communication in ways that had not happened since the invention of the printing press over 500 years ago.
The ARPAnet
Worried that the U.S. was falling behind in scientific achievement after the Soviet Union launched Sputnik on October 4, 1957, President Dwight D. Eisenhower approved the creation of the Advanced Research Projects Agency (ARPA). ARPA's stated mission was to keep the U.S. ahead of military rivals by pushing research projects that promised significant advances in defence-related fields. ARPA had several project offices that funded research in different areas depending on the changing priorities of the Department of Defense (Burman 2003). One of these divisions, the Information Processing Techniques Office (IPTO), headed by J.C.R. Licklider of Bolt, Beranek, and Newman (BBN), became a major funder of computer science in the United States and the driving force behind research into areas such as graphics, artificial intelligence, and networking.
One of Licklider’s visions was to create an “intergalactic network” of computers and people. His influential 1960 paper “Man-Computer Symbiosis” marked a revolutionary conceptual shift because it cast computers not as number-crunching machines but as an exciting new communication medium. After Licklider left ARPA for MIT, his successor, Robert Taylor, remained committed to Licklider’s “intergalactic network” because it allowed ARPA researchers from around the country to access various computers in different locations (Abbate 1999).
Laying the Foundation of the Internet
Paul Baran of the Air Force-backed RAND Corporation (Research ANd Development) would help Licklider’s dream become a reality by creating what would become the foundation of the Internet. In 1962, Baran was commissioned by the Air Force to research a decentralized, survivable way to maintain control over its missiles in the case of a nuclear attack. Baran’s final proposal was what was called a “packet-switched network.”
Unlike the traditional network system, which channelled information through one source to be processed and then routed somewhere else, packet switching essentially split large sections of data into little sections called “packets” that could be sent through different routes which all led to the same place. Upon arrival, the information would be reassembled (Segaller 1998). Packet switching allowed dynamic rerouting--in other words, information could be routed and rerouted quickly to any computer.
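To make the mechanism concrete, here is a minimal Python sketch of the packet-switching idea just described: a message is split into numbered packets, the packets may arrive out of order (as if routed along different paths), and the receiver reassembles them by sequence number. It is purely illustrative; the names are invented and it models no historical system.

```python
import random

PACKET_SIZE = 8  # bytes of payload per packet (tiny, for illustration)

def packetize(message: bytes) -> list:
    """Split a message into numbered packets that can travel independently."""
    return [{"seq": i, "payload": message[i:i + PACKET_SIZE]}
            for i in range(0, len(message), PACKET_SIZE)]

def route(packets: list) -> list:
    """Simulate independent routes: packets may arrive in any order."""
    arrived = packets[:]
    random.shuffle(arrived)
    return arrived

def reassemble(packets: list) -> bytes:
    """Put packets back in sequence order and rebuild the message."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

message = b"Packets may take different routes yet yield the same message."
assert reassemble(route(packetize(message))) == message
```

The point of the toy is the one Baran's proposal turned on: because each packet carries enough information to be reordered on arrival, no single fixed route through the network is needed.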
Baran talked to Robert Taylor of ARPA, who pushed the project forward by awarding what was called the “ARPAnet” contract to BBN. The team at BBN proposed that this network be composed of what were called Interface Message Processors (IMPs), or routers connected by modems, that would process data packets. BBN chose Honeywell’s DDP-516 to build the first IMP and, in 1969, the very first rudimentary peer-to-peer network was established when BBN installed the first IMP at UCLA and succeeded in transmitting the “l” and the “o” in the word “login” to Stanford before the system crashed. Hence, the first message on the Internet was “lo”. An hour later, they were successful in transmitting the full “login.”
Later that year, computers at four major universities (UCLA, the University of Utah, the Stanford Research Institute, and UCSB) joined the network, with other universities soon following (Burman 2003). As the network grew, however, ARPAnet managers realized that the original system supported only client-server applications like Telnet and FTP (File Transfer Protocol) and couldn’t handle host-to-host connections. These limitations were overcome with NCP, the Network Control Program, which allowed communication between different hosts running on the same network.
Becoming an ARPAnet user, however, was difficult. The first challenge for any potential user was getting access to the network. For a site to get an ARPAnet connection, it had to have a research contract with ARPA or pay the cost of setting up its node, which, in 1972, might run anywhere between $55,000 and $107,000. When a site was approved, ARPA had to order a new IMP from Bolt, Beranek, and Newman, reconfigure the network to include the new node, and arrange with AT&T for a telephone link between the new node and the rest of the ARPAnet. Once a site was connected to the ARPAnet, though, control over access was much looser (Abbate 1999). In theory, access within each site was to be limited to individuals doing work for ARPA, though few sites actually enforced that policy.
Many members of ARPAnet suspected that ARPA managers were aware that unsanctioned users were on the network and did not object. Unauthorized users who contributed improvements to the system were even tacitly encouraged. In fact, “science fiction lovers” mailing lists were apparently allowed to operate over ARPAnet provided they generated enough traffic to allow ARPA managers to observe the network’s behaviour under the load. Another unofficial but tolerated activity was Michael Hart’s Project Gutenberg, which made an effort to make historically significant documents available over the network. Hart was not an ARPA researcher but had acquired an account at the University of Illinois and began by posting the Declaration of Independence on his site’s computer in December of 1971. Project Gutenberg is still in operation on the Internet to this day (Abbate 1999).
Once on the network, users had access to some of the most advanced computer systems in the U.S., but using them was difficult or unappealing, and new sites were given little direction on how to get started. In addition, navigating what was available on the ARPAnet was difficult because the network search tools that Internet and World Wide Web (WWW) users would later take for granted did not exist. The ARPAnet, however, would change forever with the arrival of “net notes” (later “email”), created by Ray Tomlinson. Email quickly became the network's most popular and influential service, surpassing all expectations. Email was not included in the original blueprint for the network, and its success represented a radical shift in the ARPAnet’s identity and purpose. The network was originally built to provide access to computers rather than to people, but email created a deeper level of community among ARPAnet users (Murphy 2002).
In 1972, the ARPAnet was successfully demonstrated at the International Conference on Computer Communications in Washington in the presence of AT&T and other international telephone companies. In July 1975, the ARPAnet was transferred to the Defense Communications Agency as an operational network, where work continued to perfect its protocols and to expand the ARPAnet internationally by satellite links. Over the course of the decade, the ARPAnet, which was a single network that connected a few dozen sites, would be transformed into the Internet, a system of many interconnected networks capable of almost indefinite expansion (Burman 2003).
Designing the Internet
NCP, the first standard networking protocol of ARPA (now called DARPA: Defense Advanced Research Projects Agency), was rapidly becoming unable to accommodate growing network traffic. Vinton (“Vint”) Cerf and Robert Kahn created the Transmission Control Protocol/Internet Protocol (TCP/IP) in the mid-1970s, which added flexibility and sophistication to the network. In fact, the move from NCP to TCP/IP is considered by many to be the beginning of the Internet (the first use of the word “Internet” was in a 1974 paper by Cerf and Kahn on the Transmission Control Protocol). TCP did more than just set up a connection between two hosts--it also controlled the rate of data flow between the hosts, compensated for errors by retransmitting lost or damaged packets, and verified the safe arrival of packets using acknowledgments (Segaller 1998).
Cerf and Kahn planned for TCP to replace NCP as the ARPAnet’s host protocol and to be the standard host protocol in every subsequent network built by ARPA. They also proposed splitting the TCP protocol into two separate parts: a host-to-host protocol and an internetwork protocol (IP), the combination of which would become known as TCP/IP. IP would pass individual packets between machines, and TCP would be responsible for ordering these packets into reliable connections between pairs of hosts.
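That division of labour can be sketched in a few lines of Python: an IP-like layer that delivers individual packets on a best-effort basis (and sometimes drops them), and a TCP-like layer on top that numbers segments, retransmits until each one gets through, and delivers them in order. This is a hypothetical toy, not the real protocols; every name in it is invented.

```python
import random

def ip_send(packet: dict):
    """IP-like layer: best-effort delivery of one packet; may silently drop it."""
    return packet if random.random() > 0.3 else None  # simulate 30% packet loss

def tcp_transfer(segments: list) -> bytes:
    """TCP-like layer: number segments, retransmit until delivered, reorder on receipt."""
    received = {}
    for seq, data in enumerate(segments):
        while seq not in received:                  # keep retransmitting until it arrives
            arrived = ip_send({"seq": seq, "data": data})
            if arrived is not None:                 # arrival stands in for an acknowledgment
                received[arrived["seq"]] = arrived["data"]
    return b"".join(received[seq] for seq in sorted(received))  # in-order delivery

segments = [b"reliable ", b"bytes ", b"over ", b"an unreliable network"]
assert tcp_transfer(segments) == b"reliable bytes over an unreliable network"
```

The design choice the sketch illustrates is the layering itself: the lower layer makes no promises, and all reliability lives in the upper layer at the two endpoints.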
In March 1981, Major Joseph Haughney announced that all ARPAnet hosts would be required to implement TCP/IP in place of NCP by January 1983. The replacement became a major ordeal, and Dan Lynch, a computer systems manager, made up buttons that read “I Survived the TCP Transition.” After converting the ARPAnet to TCP/IP, DARPA created a separate network, MILnet, equipped with encryption devices and other security measures to support military functions, while the ARPAnet would continue to host civilian academic researchers. The ARPAnet’s military roots would continue to be downplayed and, in 1987, supervision of the Internet was transferred from the Department of Defense to the National Science Foundation (NSF) (Abbate 1999).
Launching the Internet
As the Internet grew, its backbone network, the ARPAnet, was unable to keep up. ARPAnet managers and the NSF agreed to connect the ARPAnet sites to the NSF’s regional networks and have NSFnet take over as the backbone of the Internet. NSFnet had higher-speed lines and faster switches, and it could handle more traffic. Since the NSF and DARPA were already operating their network services jointly, the merger was relatively painless. During 1988 and 1989, various DARPA sites transferred their host connections from the ARPAnet to NSFnet. On 28 February 1990, the ARPAnet was formally decommissioned, the remaining hardware was dismantled, and military operation of the Internet came to a close. Soon after the completion of NSFnet’s new and faster T1 lines, Internet traffic increased rapidly, and those T1 lines are often credited with opening the Internet to the world (Burman 2003).
In 1990, Tim Berners-Lee at the European Organization for Nuclear Research (CERN) developed the next phase of the Internet: the vocabulary of the World Wide Web. The debut of Mosaic, the first widely used graphical browser (whose developers went on to found Netscape), then provided easy access to information dispersed through servers all over the world by means of the Web’s hyperlinks. By 1995, the Internet had grown into a new communications paradigm. It had started as a network offering file sharing, remote login, and resource sharing for a small group of scientists and had evolved into a global network accessible by anyone who had an ordinary telephone line and a personal computer (Murphy 2002).
In 1995, the NSF privatized the Internet and contracted with several companies to carry most of its traffic. Today these companies, or Internet Service Providers (ISPs), include Verizon, AT&T, Qwest, and IBM. There are also smaller ISPs such as cable and DSL companies. Linking these backbones are IXPs, or Internet Exchange Points, which allow networks to exchange data. For example, while Verizon and Sprint each provide a part of the Internet’s backbone, they aren’t directly connected--they need an IXP to exchange traffic, as the toy sketch below illustrates.
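A minimal Python illustration of the IXP idea, with invented network and exchange-point names: two backbones can hand traffic to each other only if they peer at a common exchange point.

```python
# Hypothetical peering table: which exchange points each backbone connects to.
PEERING = {
    "BackboneA": {"IXP-East"},
    "BackboneB": {"IXP-East", "IXP-West"},
    "BackboneC": {"IXP-West"},
}

def can_exchange(net1: str, net2: str) -> bool:
    """Two networks can exchange traffic only at a shared exchange point."""
    return bool(PEERING[net1] & PEERING[net2])

print(can_exchange("BackboneA", "BackboneB"))  # True  (both peer at IXP-East)
print(can_exchange("BackboneA", "BackboneC"))  # False (no shared exchange point)
```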
Currently, several organizations oversee the Internet’s protocols and infrastructure to ensure that information passing from one computer to the next can be understood. These organizations include the Internet Society, the Internet Engineering Task Force, and the Internet Corporation for Assigned Names and Numbers (Burman 2003). But while the framework of the Internet is carefully designed and cared for, the content continues to be extremely democratic and free.
Future of the Internet
The Internet’s democratic nature has led countries such as China and India to see it as needing careful regulation. However, as one scholar argues, the seemingly boundless freedom of the Internet is not guaranteed. Some scholars argue that the generative power of the Internet could be on a path to lockdown through Internet-centred products such as iPods, iPhones, and TiVos, which can’t easily be modified by anyone except their vendors. Instead of using personal computers which can run any program from any source without approval from a third party, we are entering a world where centralized approval becomes necessary. These scholars assert that the Internet needs to remain “Wikipedia-ean,” with no clear boundaries between users and creators. If the Internet is to continue as an innovative means of collaboration, it will need to preserve its legacy of adaptability and its democratic nature (McCullagh 2007).
Whatever its future holds, the Internet sits at the centre of virtually all media crossroads.
-- Posted January 12, 2009
References
Abbate, Janet. 1999. Inventing the Internet. Cambridge, MA: The MIT Press.
Burman, Edward. 2003. Shift!: The Unfolding Internet Hype, Hope, and History. West Sussex, England: John Wiley and Sons, Ltd.
McCullagh, Declan. November 28, 2007. “News.com Talk: The Future of the Internet and How to Stop It.” News.com.
Murphy, Brian Martin. 2002. “A Critical History of the Internet.” Critical Perspectives on the Internet. Ed. Greg Elmer. New York, NY: Rowman and Littlefield Publishers, Inc.
Segaller, Stephen. 1998. Nerds: A Brief History of the Internet. New York, NY: TV Books.
From fertiliser to Zyklon B: 100 years of the scientific discovery that brought life and death
Posted by Nick Efstathiadis in #General History, 1900 - 2000
Robin McKie, science editor The Observer, Sunday 3 November 2013
It's 100 years since Fritz Haber found a way to synthesise ammonia – helping to feed billions but also to kill millions, and contributing to the pollution of the planet
Fritz Haber in 1919. Photograph: Topical Press Agency/Getty Images
Several hundred scientists from across the globe will gather in Ludwigshafen, Germany, next week to discuss a simple topic: "A hundred years of the synthesis of ammonia." As titles go, it is scarcely a grabber. Yet the subject could hardly be of greater importance, for the gathering on 11 November will focus on the centenary of an industrial process that has transformed our planet and threatens to bring even greater, more dramatic changes over the next 100 years.
The ammonia process – which uses nitrogen from the atmosphere as its key ingredient – was invented by German chemist Fritz Haber to solve a problem that faced farmers across the globe. By the early 20th century they were running out of natural fertilisers for their crops. The Haber plant at Ludwigshafen, run by the chemical giant BASF, transformed that grim picture exactly 100 years ago – by churning out ammonia in industrial quantities for the first time, triggering a green revolution. Several billion people are alive today only because Haber found a way to turn atmospheric nitrogen into ammonia fertiliser. "Bread from air," ran the slogan that advertised his work at the time.
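For readers who want the underlying chemistry, which the article never writes out, the Haber process combines atmospheric nitrogen with hydrogen over an iron catalyst at high temperature and pressure:

```latex
\mathrm{N_2} + 3\,\mathrm{H_2} \;\rightleftharpoons\; 2\,\mathrm{NH_3}
\qquad \text{(Fe catalyst, high temperature and pressure)}
```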
But there is another, far darker side to the history of the Haber process. By providing Germany with an industrial source of ammonia, the country was able to extend its fight in the first world war by more than a year, it is estimated. Britain's sea blockade would have ensured Germany quickly ran out of natural fertilisers for its crops. In addition, Germany would also have run out of nitrogen compounds, such as saltpetre, for its explosives. The Haber process met both demands. Trains, bursting with Haber-based explosives and scrawled with "Death to the French", were soon chugging to the front, lengthening the war and Europe's suffering.
"If you look at the impact of the Haber process on the planet, you can see that it has been greater than any other discovery or industrial process over the past 100 years," said Professor Mark Sutton, of Edinburgh University. "On the positive side, there are the billions of people who are alive today thanks to it. Without it, there would have been no food for them. On the other hand, there are all the environmental impacts that a soaring world population, sustained by Haber fertilisers, have had. In addition, there is the pollution triggered by the release of ammonia fertilisers into water supplies across the globe and into the atmosphere.
"And, for good measure, there have been all the deaths caused by explosives created from Haber-manufactured ingredients. These have reached more than 100m since Haber invented the process, according to one estimate. So we can see Haber's work has been a mixed blessing."
Bald and absurdly Teutonic in demeanour, Haber was an ardent German nationalist. He was happy his invention was used to make explosives and was a fervent advocate of gas weapons. As a result, on 22 April 1915 at Ypres, 400 tons of chlorine gas were released under his direction and sent sweeping in clouds over Allied troops. It was the world's first major chemical weapons attack. Around 6,000 men died. Haber later claimed asphyxiation was no worse than blowing a soldier's leg off and letting him bleed to death, but many others disagreed, including his wife, Clara, herself a chemist. A week after the Ypres attack, she took Haber's service revolver and shot herself, dying in the arms of Hermann, their only son.
In 1918 Haber was awarded the Nobel prize for chemistry, a decision greeted with widespread indignation. Many British, French and US diplomats and scientists refused to attend his award ceremony in Stockholm. After the rise of Hitler, Haber – who had become a rich industrialist – was expelled from Germany because he came from a Jewish family, and died in Switzerland in 1934.
The ironies that afflicted Haber's life continued in death. One of the most effective insecticides he had helped to develop was Zyklon B, which was subsequently used by the Nazis to murder more than a million people, among them members of Haber's extended family, including children of his sisters and cousins.
Since then, the use of Haber's process – or more properly the Haber-Bosch process in acknowledgement of Carl Bosch's work in turning Haber's ideas into a practical industrial process – has expanded dramatically. Today more than 100m tonnes of nitrogen are taken from the atmosphere every year and converted into ammonia compounds, in Haber-Bosch plants. These are then spread over the surface of the Earth, turning arid land into fields of plenty. As a result, our planet has been able to feed and sustain an unprecedented number of people. In 1900 there were 1.6 billion people on Earth. There are now more than 7 billion. Most of the extra mouths have been fed on food sustained by the Haber-Bosch process.
It has been calculated that half the nitrogen atoms in our bodies come from a Haber factory, via its fertilisers and the food nourished by them. As the Canadian scientist Vaclav Smil has put it in his book Enriching the Earth, the Haber-Bosch process "has been of greater fundamental importance to the modern world than the airplane, nuclear energy, spaceflight or television".
This has come at a price, however. There is the sheer strain placed on the natural environment by the number of human beings now sustained by artificial fertilisers. In addition, there are problems caused by our ever increasing appetite for ammonium chemicals. Our bodies may accumulate nitrogen atoms from fertiliser plants, but far more of these atoms fail to make it into the food chain and are instead released into the environment. The result, in many areas, has been calamitous. Nitrogen fertilisers get washed into streams, rivers, lakes and coastal areas where they feed algae that spread in thick carpets over the waters, suffocating life below.
Then there is the atmospheric release of all the excess ammonia, says Sutton. "Ammonia is released into the air from fertilisers on farms and can then be deposited on natural habitats with very unwelcome consequences," he said. "Consider the sundew … It can grow in very harsh environments in this country because its sticky leaves allow it to catch insects, which provide it with nitrogen and other important compounds. But when ammonia from artificial fertilisers is dumped nearby other less hardy plants grow and crowd out the sundew."
Sutton believes that while the dangers of fossil fuels and greenhouse gases are well known today, those of the nitrogen cycle, which affects drinking water, contributes to air pollution and affects the health of large parts of the population, have gone unrecognised. "We need nitrogen compounds to sustain our food supply but we need to be much more careful how we use them. That is the real lesson of the Haber process centenary."
HABER'S BREAKTHROUGH
The atmosphere that we breathe is 78% nitrogen. However, it is in a relatively unreactive form and until the beginning of the 20th century, the only way to obtain nitrogen-rich chemicals – which make excellent fertilisers – was to use manure, in particular bird dung, or guano. At that time, guano was being imported, mainly from South America, in vast quantities to sustain European agriculture. However, Haber found a way to make ammonia, a nitrogen-based chemical, using hydrogen and atmospheric nitrogen. A mixture of the gases was heated in special high-pressure vessels, which produced small but significant quantities of ammonia. This process of turning inert atmospheric nitrogen into a chemically reactive form is known as nitrogen fixation.
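For reference, the overall reaction at the heart of the process can be written in the standard textbook form below. The equation and the typical operating conditions given here – an iron catalyst at roughly 400–500°C and 150–300 atmospheres – are general chemistry-text values, not figures taken from this article:

$$\mathrm{N_2(g) + 3\,H_2(g) \rightleftharpoons 2\,NH_3(g)}, \qquad \Delta H \approx -92\ \mathrm{kJ\,mol^{-1}}$$

Because the forward reaction is exothermic and halves the number of gas molecules, high pressure pushes the equilibrium towards ammonia, while the catalyst delivers a workable rate at a temperature low enough not to destroy the yield.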
The German chemical company BASF purchased Haber's process and asked Carl Bosch to scale it up to industrial level. Bosch was awarded a Nobel prize in 1931 for this work.
During the first world war, some of the synthetic ammonia from Haber plants was turned into nitric acid, which is a critical ingredient for explosives.
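In modern terms, this ammonia-to-nitric-acid conversion is the Ostwald process – a name the article itself does not use – and in outline it runs:

$$\mathrm{4\,NH_3 + 5\,O_2 \rightarrow 4\,NO + 6\,H_2O} \quad \text{(over a platinum catalyst)}$$
$$\mathrm{2\,NO + O_2 \rightarrow 2\,NO_2}$$
$$\mathrm{3\,NO_2 + H_2O \rightarrow 2\,HNO_3 + NO}$$

The nitric acid could then be used to nitrate compounds such as toluene into high explosives.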
The Piri Reis Map, which is a genuine document, not a hoax of any kind, was made at Constantinople in 1513 CE. It focuses on the western coast of Africa, the eastern coast of South America, and the northern coast of Antarctica. Piri Reis could not have acquired his information on this latter region from contemporary explorers because Antarctica remained undiscovered until 1818 CE, more than 300 years after he drew the map. The ice-free coast of Queen Maud Land shown in the map is a colossal puzzle because the geological evidence confirms that the very latest date that it could have been surveyed and charted in an ice-free condition is 4000 BCE. It is not possible to pinpoint the earliest date that such a task could have been accomplished, but it seems that the Queen Maud Land littoral may have remained in a stable, unglaciated condition for at least 9,000 years before the spreading ice-cap swallowed it entirely. There is no civilization known to history that had the capacity or need to survey that coastline in the relevant period, i.e. between 13,000 BCE and 4000 BCE.
In other words, the true enigma of this 1513 map is not so much its inclusion of a continent that was not discovered until 1818 but rather its portrayal of part of the coastline of that continent under ice-free conditions that came to an end 6,000 years ago and that have not since recurred.
How can this be explained? Piri Reis obligingly gives us the answer in a series of notes written in his own hand on the map itself. Here he tells us that he was not responsible for the original surveying and cartography. On the contrary he honestly admits that his role was merely that of compiler and copyist and that his own map was derived from a large number of source maps. Some of these had been drawn by contemporary or near-contemporary explorers (including Christopher Columbus), who had by then reached South America and the Caribbean, but others were documents of great antiquity dating back to the fourth century BCE or earlier.
Piri Reis did not venture any suggestion as to the identity of the cartographers who had produced the earlier maps. In 1963, however, Professor Charles H. Hapgood proposed a novel and thought-provoking solution to the problem. Some of the source maps that the Admiral had made use of, he argued, in particular those said to date back to the fourth century BCE, had themselves been based on even older sources, which in turn had been based on sources more ancient still, originating in the farthest antiquity. There was, he asserted, irrefutable evidence that the earth had been comprehensively mapped before 4000 BCE by a hitherto unknown and undiscovered civilization that had achieved a high level of technological advancement:
"It appears" [he concluded] "that accurate information has been passed down from people to people. It appears that the charts must have originated with a people unknown and they were passed on, perhaps by the Minoans and the Phoenicians, who were, for a thousand years and more, the greatest sailors of the ancient world. We have evidence that they were collected and studied in the great library of Alexandria [Egypt] and that compilations of them were made by the geographers who worked there."
From Alexandria, according to Hapgood's reconstruction, copies of these compilations and of some of the original source maps were transferred to other centers of learning -- notably Constantinople. Finally, when Constantinople was seized by the Venetians during the Fourth Crusade in 1204, the maps began to find their way into the hands of European sailors and adventurers:
"Most of these maps were of the Mediterranean and the Black Sea. But maps of other areas survived. These included maps of the Americas and maps of the Arctic and Antarctic Oceans. It becomes clear that the ancient voyagers traveled from pole to pole. Unbelievable as it may appear, the evidence nevertheless indicates that some ancient people explored Antarctica when its coasts were free of ice. It is clear, too, that they had an instrument of navigation for accurately determining longitudes that was far superior to anything possessed by the peoples of ancient, medieval or modern times until the second half of the 18th century.This evidence of a lost technology will support and give credence to many of the other hypotheses that have been brought forward of a lost civilization in remote times. Scholars have been able to dismiss most of that evidence as mere myth, but here we have evidence that cannot be dismissed. The evidence requires that all the other evidence that has been brought forward in the past should be reexamined with an open mind." [Hapgood]
Despite a ringing endorsement from Albert Einstein, as we shall see below, and despite the later admission of John Wright, President of the American Geographical Society, that Hapgood had "posed hypotheses that cry aloud for further testing", no further scientific research has ever been undertaken into these anomalous early maps. Moreover, far from being applauded for making a serious new contribution to the debate about the antiquity of human civilization, Hapgood himself continued until his death to be cold-shouldered by the majority of his professional peers – who couched their discussion of his work in what has accurately been described as "thick and unwarranted sarcasm, selecting trivia and factors not subject to verification as the bases for condemnation, seeking in this way to avoid the basic issues."
The late Charles Hapgood taught the history of science at Keene State College, New Hampshire, USA. He was not a geologist or an ancient historian. It is possible, however, that future generations will remember him as the man whose work undermined the foundations of world history – and a large chunk of world geology as well.
Albert Einstein was amongst the first to realize this when he took the unprecedented step of contributing the foreword to a book that Hapgood wrote in 1953, some years before he began his investigation of the Piri Reis Map:
"I frequently receive communications from people who wish to consult me concerning their unpublished ideas," Einstein observed. "It goes without saying that these ideas are very seldom possessed of scientific validity. The very first communication, however, that I received from Mr. Hapgood electrified me. His idea is original, of great simplicity, and - if it continues to prove itself -- of great importance to everything that is related to the history of the earth's surface."
The "idea" expressed in Hapgood's 1953 book is a global geological theory which, together with many other anomalies of earth science, elegantly explains how and why large parts of Antarctica could have remained ice-free until 4000 BCE. In brief the argument is as follows:
- Antarctica was not always covered with ice and was, at one time, much warmer than it is today.
- It was warm because it was not physically located at the South Pole in that period. Instead it stood approximately 2,000 miles further to the north. This "would have put it outside the Antarctic Circle in a temperate or cold temperate climate."
- The continent moved to its present position inside the Antarctic Circle as a result of a mechanism known as earth-crust displacement. This mechanism, in no sense to be confused with plate tectonics or so-called continental drift, is one whereby the lithosphere, the whole outer crust of the earth: "may be displaced at times, moving over the soft inner body, much as the skin of an orange, if it were loose, might shift over the inner part of the orange all in one piece."
- "During the envisaged southwards movement of Antarctica brought about by earth-crust displacement, the continent would gradually have grown colder, an ice-cap forming and remorselessly expanding over several thousands of years until it at last attained its present dimensions."
Orthodox geologists, however, remain reluctant to accept Hapgood's theory (although none have succeeded in proving it incorrect). It raises many questions.
Of these by far the most important is the following: what conceivable mechanism would be able to exert sufficient thrust on the lithosphere to precipitate a phenomenon of such magnitude as a crustal displacement?
We have no better guide than Einstein to summarize Hapgood's findings in this respect:
"In a polar region there is continual deposition of ice, which is not symmetrically distributed about the pole. The earth's rotation acts on these unsymmetrically deposited masses, and produces centrifugal momentum that is transmitted to the rigid crust of the earth. The constantly increasing centrifugal momentum produced in this way will, when it has reached a certain point, produce a movement of the earth's crust over the rest of the earth's body..." (Einstein's foreword to "Earth's Shifting Crust" p. 1)
The Piri Reis Map seems to contain surprising collateral evidence in support of the thesis of a geologically recent glaciation of parts of Antarctica following a sudden southwards displacement of the earth's crust. Moreover since such a map could only have been drawn prior to 4000 BCE, its implications for the history of human civilization are staggering. Prior to 4000 BCE there are supposed to have been no civilizations at all.
At some risk of oversimplification, the academic consensus is broadly as follows:
- Civilization first developed in the "Fertile Crescent" of the Middle East.
- This development only began after 4000 BCE, and culminated in the emergence of the earliest true civilizations (Sumer and Egypt) only at around 3000 BCE, soon followed by the Indus Valley and China.
- About 1,500 years later, civilization took off spontaneously and independently in the Americas.
- Since 3000 BCE in the Old World (and about 1500 BCE in the New) civilization has steadily "evolved" in the direction of ever more refined, complex and productive forms.
- In consequence, and particularly by comparison with ourselves, all ancient civilizations, and all their works, are to be understood as essentially primitive (the Sumerian astronomers regarded the heavens with unscientific awe, and even the Pyramids of Egypt were built by "technological primitives").
The evidence of the Piri Reis map appears to upset all this.
In his day, Piri Reis was a well-known figure, and his historical identity is firmly established. An Admiral in the Navy of the Ottoman Turks, he was involved, often on the winning side, in numerous sea battles of the mid-16th century. He was, in addition, considered to be a great expert on the lands of the Mediterranean, and was the author of a famous sailing book, known as the Kitabi Bahriye, which provided a comprehensive description of the coasts, harbors, currents, shallows, landing places, bays and straits of the Aegean and Mediterranean Seas. Despite this illustrious career he somehow managed to fall foul of his masters and was eventually beheaded in 1554 or 1555.
The source maps that Piri Reis used to draw up his 1513 map were, in all probability, originally lodged in the Imperial Library at Constantinople, to which the Admiral is known to have enjoyed privileged access. Those sources (which may in turn have been transferred or copied from other even more ancient centers of learning) no longer exist, or, at any rate, have not yet been found. It was, however, in the library of the old Imperial Palace at Constantinople that the Piri Reis map was itself rediscovered, painted on a gazelle skin and rolled up on a dusty shelf, as recently as 1929.
As the baffled Lt. Colonel Ohlmeyer admitted in his letter to Hapgood in 1960, the Piri Reis map depicts the subglacial topography, the true profile of Queen Maud Land Antarctica beneath the ice. This profile remained completely hidden from view from 4000 BCE (when the advancing ice-sheet covered it) until it was at last revealed again as a result of the comprehensive seismic survey of Queen Maud Land that was carried out during 1949 by a joint British-Swedish scientific reconnaissance team.
If Piri Reis had been the only cartographer to have had access to such anomalous information then it would be wrong to place any great weight on his map. At the most one might say "Perhaps it is significant but, then again, perhaps it is just a coincidence."
The Turkish Admiral, however, was by no means alone in the possession of seemingly impossible and inexplicable geographical knowledge. It would be futile to speculate further than Hapgood himself has already done as to what manner of "underground stream" could have carried and preserved such knowledge through the ages, transmitting fragments of it from culture to culture and from epoch to epoch. Whatever the mechanism, the plain fact is that a number of other cartographers do seem to have been privy to the same curious secrets.
Is it possible that all these mapmakers could have partaken, perhaps unknowingly, in the bountiful scientific legacy of a vanished civilization?
The Piri Reis map contains more mysteries than just Antarctica.
Drawn in 1513, the map demonstrates an uncanny knowledge of South America -- and not only of its eastern coast but of the Andes Mountains on the western side of the continent, which were of course unknown at the time. The map correctly shows the Amazon River rising in these unexplored mountains and thence flowing eastward.
Itself compiled from more than twenty different source documents of varying antiquity, the Piri Reis Map depicts the Amazon not once but twice (most probably as a result of the unintentional overlapping of two of the source documents used by the Turkish admiral). In the first of these the Amazon’s course is shown down to its Para River mouth, but the important island of Marajo does not appear. According to Hapgood, this suggests that the relevant source map must have dated from a time, perhaps as much as 15,000 years ago, when the Para River was the main or only mouth of the Amazon and when Marajo Island was part of the mainland on the northern side of the river. The second depiction of the Amazon, on the other hand, DOES show Marajo (in fantastically accurate detail) despite the fact that this island was not discovered until 1543. Again, the possibility is raised of an unknown civilization which undertook continuous surveying and mapping operations of the changing face of the earth over a period of many thousands of years, with Piri Reis making use of earlier and later source maps left behind by this civilization.
Although they remained undiscovered until 1592, the Falkland Islands appear on the 1513 map at their correct latitude.
The library of ancient sources incorporated in the Piri Reis Map may also account for the fact that it convincingly portrays a large island in the Atlantic Ocean to the east of the South American coast where no such island now exists. Is it pure coincidence that this "imaginary" island turns out to be located right over the sub-oceanic Mid-Atlantic Ridge just north of the equator and 700 miles east of the coast of Brazil, where the tiny Rocks of Sts. Peter and Paul now jut above the waves? Or was the relevant source map drawn deep in the last Ice Age, when sea levels were far lower than they are today and a large island could indeed have been exposed at this spot?
Captain Charles Upham VC & Bar
Posted by Nick Efstathiadis in #General History, 1900 - 2000
The Telegraph, 12:01AM GMT 23 Nov 1994
Captain Charles Upham, who has died aged 86, twice won the Victoria Cross.
Only three men have ever won double VCs, and the other two were medical officers: Col A Martin-Leake, who received the decoration in the Boer War and the First World War; and Capt N G Chavasse, who was killed in France in 1917. Chavasse's family was related to Upham's.
For all his remarkable exploits on the battlefield, Upham was a shy and modest man, embarrassed when asked about the actions he had been decorated for. "The military honours bestowed on me," he said, "are the property of the men of my unit."
In a television interview in 1983 he said he would have been happier not to have been awarded a VC at all, as it made people expect too much of him. "I don't want to be treated differently from any other bastard," he insisted.
When King George VI was conferring Upham's second VC he asked Maj-Gen Sir Howard Kippenberger, his commanding officer: "Does he deserve it?"
"In my respectful opinion, Sir," replied Kippenberger, "Upham won this VC several times over."
A great-great nephew of William Hazlitt, and the son of a British lawyer who practised in New Zealand, Charles Hazlitt Upham was born in Christchurch on Sept 21 1908.
Upham was educated at the Waihi Preparatory School, Christ's College and Canterbury Agricultural College, which he represented at rugby and rowing.
He then spent six years as a farm manager, musterer and shepherd, before becoming a government valuer in 1937.
In 1939 he volunteered for the 2nd New Zealand Expeditionary Force as a private in the 20th Battalion and became a sergeant in the first echelon advance party. Commissioned in 1940, he went on to serve in Greece, Crete and the Western Desert.
Upham won his first VC on Crete in May 1941, commanding a platoon in the battle for Maleme airfield. During the course of an advance of 3,000 yards his platoon was held up three times. Carrying a bag of grenades (his favourite weapon), Upham first attacked a German machine-gun nest, killing eight paratroopers, then destroyed another which had been set up in a house. Finally he crawled to within 15 yards of a Bofors anti-aircraft gun before knocking it out.
When the advance had been completed he helped carry a wounded man to safety in full view of the enemy, and then ran half a mile under fire to save a company from being cut off. Two Germans who tried to stop him were killed.
The next day Upham was wounded in the shoulder by a mortar burst and hit in the foot by a bullet. Undeterred, he continued fighting and, with his arm in a sling, hobbled about in the open to draw enemy fire and enable their gun positions to be spotted.
With his unwounded arm he propped his rifle in the fork of a tree and killed two approaching Germans; the second was so close that he fell on the muzzle of Upham's rifle.
During the retreat from Crete, Upham succumbed to dysentery and could not eat properly. The effect of this and his wounds made him look like a walking skeleton, his commanding officer noted. Nevertheless he found the strength to climb the side of a 600 ft deep ravine and use a Bren gun on a group of advancing Germans.
At a range of 500 yards he killed 22 out of 50. His subsequent VC citation recorded that he had "performed a series of remarkable exploits, showing outstanding leadership, tactical skill and utter indifference to danger". Even under the hottest fire, Upham never wore a steel helmet, explaining that he could never find one to fit him.
His second VC was earned on July 15 1942, when the New Zealanders were concluding a desperate defence of the Ruweisat ridge in the 1st Battle of Alamein. Upham ran forward through a position swept by machine-gun fire and lobbed grenades into a truck full of German soldiers.
When it became urgently necessary to take information to advance units which had become separated, Upham took a Jeep on which a captured German machine-gun was mounted and drove it through the enemy position.
At one point the vehicle became bogged down in the sand, so Upham coolly ordered some nearby Italian soldiers to push it free. Though they were somewhat surprised to be given an order by one of the enemy, Upham's expression left them in no doubt that he should be obeyed.
By now Upham had been wounded, but not badly enough to prevent him leading an attack on an enemy strong-point, all the occupants of which were then bayoneted. He was shot in the elbow, and his arm was broken. The New Zealanders were surrounded and outnumbered, but Upham carried on directing fire until he was wounded in the legs and could no longer walk.
Taken prisoner, he proved such a difficult customer that in 1944 he was confined in Colditz Castle, where he remained for the rest of the war. His comments on Germans were always sulphurous.
For his actions at Ruweisat he was awarded a Bar to his VC. His citation noted that "his complete indifference to danger and his personal bravery have become a byword in the whole of the New Zealand Expeditionary Force".
After his release from Colditz in 1945 Upham went to England and inquired about the whereabouts of one Mary ("Molly") McTamney, from Dunedin. Told that she was a Red Cross nurse in Germany, he was prepared, for her sake, to return to that detested country. In the event she came to England, where they were married in June 1945.
Back in New Zealand, Upham resisted invitations to take up politics. In appreciation of his heroism the sum of £10,000 was raised to buy him a farm. He appreciated the tribute, but declined the money, which was used to endow the Charles Upham Scholarship Fund to send sons of ex-servicemen to university.
Fiercely determined to avoid all publicity, Upham at first refused to return to Britain for a victory parade in 1946, and only acceded at the request of New Zealand's Prime Minister.
Four years later he resisted even the Prime Minister's persuasion that he should go to Greece to attend the opening of a memorial for the Australians and New Zealanders who had died there – although he eventually went at Kippenberger's request.
In 1946, Upham bought a farm at Rafa Downs, some 100 miles north of Christchurch beneath the Kaikoura Mountains, where he had worked before the war. There he found the anonymity he desired.
In 1962, he was persuaded to denounce the British government's attempt to enter the Common Market: "Britain will gradually be pulled down and down," Upham admonished, "and the whole English way of life will be in danger." He reiterated the point in 1971: "Your politicians have made money their god, but what they are buying is disaster."
He added: "They'll cheat you yet, those Germans."
Upham and his wife had three daughters, including twins.
Published November 23 1994
The history of Southeast Asia has been characterized by interaction between regional players and foreign powers. Though 11 countries currently make up the region, the history of each country is intertwined with that of all the others. For instance, the Malay empires of Srivijaya and Malacca covered modern-day Indonesia, Malaysia and Singapore, while the Burmese, Thai and Khmer peoples governed much of Indochina. At the same time, opportunities and threats from both east and west shaped the direction of Southeast Asia. The histories of the countries within the region only began to develop independently of each other once European colonisation was in full swing, between the 17th and 20th centuries.
Introduction
Evidence suggests that the earliest non-aboriginal Southeast Asians came from southern China and were Austronesian speakers. Contemporary research by anthropologists, linguists and archaeologists suggests that the inhabitants of the Malay Archipelago migrated from southern China to the islands of the Philippines around 2500 BC and later spread to modern-day Malaysia and Indonesia.
The earliest population of Southeast Asia was animist before Hinduism and Buddhism were exported from the Indian subcontinent. Islam arrived mostly through Indian Muslims and later dominated much of the archipelago around the 13th century while Christianity came along when European colonization started around the 16th century. During the classical age, the existence of Southeast Asia had been known to the Greeks. The Greek astronomer Ptolemy in his Geographia named the Malay Peninsula as Aurea Chersonesus (Golden Peninsula) while Java was called Labadius. Labadius was probably a corruption of Sanskrit Yavadvipa which refers to the same island. An ancient Hindu text may have earlier referred to Southeast Asia as Suvarnabhumi which means land of gold.
The region has been an important source of spices, and this was one of the reasons European explorers were attracted to the Far East. During the colonial period, states of the region became important assets to the British, the Dutch and the French. British Malaya, for instance, was the world’s largest producer of tin and rubber, while the Dutch East Indies was a major source of the Netherlands’ wealth.
During the 1990s, Southeast Asia emerged as the fastest-growing economy in the world. Its successes caused some to call Southeast Asia an economic miracle and Singapore one of the "Four Asian Tigers". Though the Asian financial crisis of the late 1990s crippled many of the region’s economies, growth has since resumed at a more sustainable rate as demand from the United States and the People’s Republic of China soars.
Ancient and classical kingdoms
Southeast Asia has been inhabited since prehistoric times. The communities in the region evolved to form complex cultures with varying degrees of influence from India and China.
The ancient kingdoms fall into two distinct categories. The first comprises the agrarian kingdoms, for which agriculture was the main economic activity; most were located in mainland Southeast Asia, among them the Ayutthaya Kingdom, based on the Chao Phraya River delta, and the Khmer Empire on the Tonle Sap. The second comprises the maritime states, which depended on sea trade; Malacca and Srivijaya were maritime states.
A succession of trading systems dominated the trade between China and India. First, goods were shipped through Funan to the Isthmus of Kra, portaged across the narrow isthmus, and then transhipped for India and points west. Around the sixth century CE, merchants began sailing to Srivijaya, where goods were transhipped directly. The limits of technology, and contrary winds during parts of the year, made it difficult for the ships of the time to proceed directly from the Indian Ocean to the South China Sea. The third system involved direct trade between the Indian and Chinese coasts.
Very little is known about Southeast Asian religious beliefs and practices before the advent of Indian merchants and religious influences from the second century BC onwards. Prior to the 13th century, Buddhism and Hinduism were the main religions in Southeast Asia.
The first dominant power to arise in the archipelago was Srivijaya in Sumatra. From the fifth century CE, the capital, Palembang, became a major seaport and functioned as an entrepot on the Spice Route between India and China. Srivijaya was also a notable centre of Vajrayana Buddhist learning and influence. Srivijaya’s wealth and influence faded when changes in nautical technology in the tenth century CE enabled Chinese and Indian merchants to ship cargo directly between their countries, and also enabled the Chola state in southern India to carry out a series of destructive attacks on Srivijaya’s possessions, ending Palembang’s entrepot function.
In the Philippines, the Laguna Copperplate Inscription, dating from 900 CE, records the clearing of a debt owed by a Maharlika caste nobleman named Namwaran, who lived in the Manila area. This document shows strong Srivijayan influence, and mentions a leader of Medan, Sumatra.
Java was dominated by a kaleidoscope of competing agrarian kingdoms, including the Sailendras, Mataram, Singosari and finally Majapahit.
European colonization
Europeans first came to Southeast Asia in the sixteenth century. It was the lure of trade that brought Europeans to Southeast Asia while missionaries also tagged along the ships as they hoped to spread Christianity into the region.
Portugal was the first European power to establish a bridgehead into the lucrative Southeast Asian trade route, with the conquest of the Sultanate of Malacca in 1511. The Netherlands and Spain followed, and soon superseded Portugal as the main European powers in the region. The Dutch took over Malacca from the Portuguese in 1641, while Spain began to colonize the Philippines (named after Philip II of Spain) from the 1560s. Acting through the Dutch East India Company, the Dutch established the city of Batavia (now Jakarta) as a base for trading and expansion into the other parts of Java and the surrounding territory.
Britain, in the form of the British East India Company, came relatively late onto the scene. Starting with Penang, the British began to expand their Southeast Asian empire. They also temporarily possessed Dutch territories during the Napoleonic Wars. In 1819, Stamford Raffles established Singapore as a key trading post for Britain in its rivalry with the Dutch. However, the rivalry cooled in 1824 when an Anglo-Dutch treaty demarcated their respective interests in Southeast Asia. From the 1850s onwards, the pace of colonization shifted to a significantly higher gear.
This phenomenon, known as the New Imperialism, saw the conquest of nearly all Southeast Asian territories by the colonial powers. The Dutch East India Company and the British East India Company were dissolved by their respective governments, which took over the direct administration of the colonies. Only Thailand was spared the experience of foreign rule, although it too was greatly affected by the power politics of the Western powers.
By 1913, the British occupied Burma, Malaya and the Borneo territories, the French controlled Indochina, and the Dutch ruled the Netherlands East Indies, while Portugal managed to hold on to Portuguese Timor. In the Philippines, Filipino revolutionaries declared independence from Spain in 1898, but as a result of the Spanish-American War the country was handed over to the United States, despite protests.
Colonial rule had a profound effect on Southeast Asia. While the colonial powers profited much from the region’s vast resources and large market, colonial rule did develop the region to a varying extent. Commercial agriculture, mining and an export based economy developed rapidly during this period. Increased labour demand resulted in mass immigration, especially from British India and China, which brought about massive demographic change. The institutions for a modern nation state like a state bureaucracy, courts of law, print media and to a smaller extent, modern education, sowed the seeds of the fledgling nationalist movements in the colonial territories. In the inter-war years, these nationalist movements grew and often clashed with the colonial authorities when they demanded self-determination.
Japanese colonization
During World War II, the region was invaded by the Japanese Imperial Army and included in the Greater East Asia Co-Prosperity Sphere. Thailand was the only country to maintain a nominal independence by making a political and military alliance with the Empire of Japan.
Decolonization
With rejuvenated nationalist movements in wait, the Europeans returned to a very different Southeast Asia after World War II. Indonesia declared independence on 17 August 1945 and subsequently fought a bitter war against the returning Dutch; the Philippines were granted independence in 1946. Burma secured its independence from Britain in 1948, and the French were driven from Indochina in 1954 after a bitterly fought war against the Vietnamese nationalists. The newly established United Nations provided a forum both for nationalist demands and for the newly independent nations.
During the Cold War, countering the threat of communism was a major theme in the decolonization process. After suppressing the communist insurrection during the Malayan Emergency of 1948 to 1960, Britain granted independence to Malaya in 1957 and, within the framework of the Federation of Malaysia, to Singapore, Sabah and Sarawak in 1963. In one of the bloodiest single incidents of violence in Cold War Southeast Asia, General Suharto seized power in Indonesia in 1965 and initiated a massacre of approximately 500,000 alleged members of the Indonesian Communist Party (PKI).
The United States' intervention against communist forces in Indochina, during the conflict commonly referred to in the United States as the Vietnam War, meant that Vietnam, Laos and Cambodia had to endure a prolonged war on their route to independence.
In 1975, Portuguese rule ended in East Timor. However, independence was short-lived as Indonesia annexed the territory soon after. Finally, Britain ended its protectorate of the Sultanate of Brunei in 1984, marking the end of European rule in Southeast Asia.
Contemporary Southeast Asia
Modern Southeast Asia has been characterized by high economic growth in most countries and by closer regional integration. Indonesia, Malaysia, the Philippines, Singapore and Thailand have traditionally experienced high growth and are commonly recognized as the more developed countries of the region. Of late, Vietnam too has been experiencing an economic boom. However, Myanmar, Cambodia, Laos and the newly independent East Timor still lag behind economically.
On August 8, 1967, the Association of Southeast Asian Nations (ASEAN) was founded by Thailand, Indonesia, Malaysia, Singapore and the Philippines. Since Cambodia's admission in 1999, East Timor has been the only Southeast Asian country outside ASEAN, although plans are under way for its eventual membership. The association aims to enhance cooperation among the Southeast Asian community. The ASEAN Free Trade Area has been established to encourage greater trade among ASEAN members. ASEAN has also been a front runner in the greater integration of the Asia-Pacific region through the East Asia Summit.