Clif Stratton – Summer 2017 History 305

How the Cold War Spawned the North Korean Dictatorship


The Democratic People’s Republic of Korea (DPRK), commonly known as North Korea, is located in East Asia on the northern half of the Korean peninsula. It is a country some refer to as the “Hermit Kingdom,” based upon its attempts at self-sufficiency, its communist totalitarian government, and the lengths its regime goes to in order to shut in its people and block out outside influences. Today North Korea, with an estimated population of 24.9 million, is led by Kim Jong-Un, a totalitarian dictator with nominal communist tendencies. For many years the North Korean government has systematically starved, tortured, brutalized, and brainwashed its citizens into utter submission, and it does not take kindly to any sort of resistance from anyone under its rule. The North Korean people have little to no access to medical care, electricity, or nutritious food, and if the regime even suspects the slightest insubordination by any of its citizens, they can easily be found guilty and sentenced to many years of forced labor in its labor camps (which have been compared to the Holocaust camps), or even publicly executed [1]. The regime holds a general disgust for democratic and humanitarian ideologies and does not back down to any global body, including the United Nations (U.N.). In 2014 the U.N. declared its intent to try the North Korean regime for crimes against humanity, but with the global powers afraid of exacerbating an already difficult line between peace and war, nothing has yet come of these declarations [2]. The way things sit right now in East Asia begs the question: how did the balance of power end up this way for North Korea, and how did the Cold War contribute to North Korea’s increasing international isolation and totalitarianism?

The North Korean regime was once a force to be reckoned with. It was a premier strong-arm for the advancing Soviet Union and its communist-socialist politics, as well as a replicator of them. When the Soviet Union collapsed, it sank the North Korean economy and put a vicious strain on its people. North Korea’s progress has been stunted, and without the might of a superpower to facilitate its strength, North Korea struggles to coexist with a globalizing world. In the early 1900s relations between Japan and Russia soured, as both powers felt entitled to influence over the Korean peninsula. After Japan perpetrated a successful surprise attack against Russia’s Port Arthur in 1904, it garnered a new reputation in the eyes of Russia and China as a strong imperial military power. Roosevelt and the Americans gave their blessing to Japan’s desire for imperial rule over Korea in 1905, so long as Japan would recognize the Philippines as an American colony. Japan then carried out a hostile and malicious colonization and occupation of the Korean peninsula between 1910 and 1945. Japan’s sole purpose in these 35 years was to completely remove and replace the historical identity of the Korean Choson Dynasty, built upon Confucian principles over many centuries, with an identity of all things Japanese. Japan was seen by the West as an enlightener of Korea. Around the 1920s Koreans began to resist Japan’s harsh, degrading imperial rule, which resulted in the formation of a Korean guerrilla resistance force that fought the Japanese. The Korean guerrilla fighters were supported by communist Russia and China. One of these Korean guerrilla fighters turned out to be Kim Il Sung.
Accounts of Kim Il Sung were and still are widely disputed between the divided Koreas, with the North claiming Kim single-handedly defeated the Japanese imperialists, and the South claiming Kim was merely an imposter who stole the name of a revered Korean guerrilla patriot [3].

Figure 1. This is Kim Il Sung, who was the dictator of North Korea (DPRK) from 1945, after WW2 ended, until his death in 1994.

Kim Il Sung was born on April 15th, 1912 (the day the Titanic sank). He joined the communist guerrilla resistance against Japan in the early 1930s. Kim Il Sung is believed to have served as a junior officer under Chinese and Soviet command until 1945, when WW2 ended. After the war the Eastern communist superpower needed an overseer of the northern half of Korea to which it laid claim. Kim Il Sung was selected by Joseph Stalin to manage the northern half, while the southern half held elections for its leadership and eventually elected Syngman Rhee. The reality of the situation was that Kim was to be a Soviet puppet, and he was exactly that between 1945 and 1950: all of Kim’s leadership and power was a direct extension of Stalin’s dictates. During this time, both Koreas claimed authority, divinity, and legitimate right over the whole peninsula, which created friction. This friction was felt not only between the DPRK and the ROK but also by the Chinese and Soviet governments, as well as America and the United Nations. Kim Il Sung and the DPRK put much pressure on Stalin for permission to invade South Korea and liberate it from the U.S. imperialists and their puppet Syngman Rhee, but Stalin refused because he did not want to provoke a confrontation with the U.S. In late 1949, after the USSR created and successfully tested its first atom bomb while consolidating friendly communist power over China, Stalin gained more confidence in his military might, and in early 1950 he began planning an invasion of the ROK with Kim Il Sung [4].

    Figure 2. This is Joseph Stalin who was the dictator of the Soviet Union (USSR) from 1929 until his death in 1953.

The Korean War officially commenced on June 25th, 1950, and was described to the American public in these terms: “The Soviet puppets in North Korea have set the match to the powder train.” The lighting of this metaphorical match acknowledged the transgression of the 38th parallel, the line of latitude separating North from South Korea, which had originally been established to push out the surrendering Japanese forces that once held military occupation of Korea. That front-line division between the two Koreas became the precursor to the power move of the Soviet-supplied and Soviet-allied North Korean forces across the newly created front line into the democracy of South Korea (ROK). With this clear act of war, the United States frantically gathered an immediate meeting of the United Nations to contend with the Soviet-backed military action against South Korea. With Russia absent from those meetings, the United Nations condemned the engagement past the 38th parallel, and the Korean War began [5].

Bruce Cumings stated in his book North Korea: Another Country, “For the Truman cold war liberal, Korea was a success, the limited war. For the MacArthur conservative, Korea was a failure: the first defeat in American history, more properly a stalemate, and in any case the result proved that there was no substitute for victory.” At the start of the Korean War, American GIs were told that as soon as the North Koreans saw the whites of Yankee eyes, they would turn tail and run. The reality was far different, as the North Korean fighters had been completely underestimated by President Truman and General MacArthur. The North Korean forces were radically fanatical soldiers committed to the cause of fighting for their beloved homeland, their Korea. Once American soldiers started getting pushed back toward South Korea by the North Korean and Chinese infantry, General MacArthur turned to an air-superiority fight and ordered the destruction of every means of communication and every factory installation, and every North Korean city and village burned to the ground with napalm and B-29 incendiary bombs. The U.S. used more napalm in the Korean War than it later did in the Vietnam War, and its heavy use was downplayed, as only American strategic battle victories were in vogue at the time, with no attention paid to how the battles were being won. Between 1950 and 1953, over 54,000 American GIs were killed, along with over three million North Korean soldiers, over one million South Korean soldiers, and nearly one million Chinese soldiers. The great historical sadness of the Korean War was how it turned the Korean people against each other, creating a whole other dynamic to the war and resulting in untold numbers of innocent civilians being slaughtered amid the paramount paranoia of its madness.
In the Korean War the divided Koreas were a hotbed for civilian atrocities and deaths, as suspicion over who was working with the enemy and who was a foe drove many Korean and American soldiers mad, and death became the calming of fears [6].

Just as U.S. President Harry S. Truman had announced the start of the Korean War thirty-seven months earlier, so President Dwight D. Eisenhower proclaimed its end with the signing of the armistice on July 27th, 1953. The signing of the armistice by all nations involved in the Korean conflict brought about a cease-fire and an end to the brutal and futile battle between the Soviet-backed socialist faction in the northern part of Korea and the U.S./U.N.-backed democratic faction in the south. Though the war ended at the fighting front once called the 38th parallel, that line was reinstated and renamed the Demilitarized Zone (DMZ). In a public broadcast on radio and television, President Eisenhower warned that the war on the political and diplomatic scale was far from over. Eisenhower reassured the U.S. defense industrial complex and its government-funded markets that they would not be lessened or going away anytime soon, as the Korean conflict remained a problem short of open war, and the U.S. and the United Nations needed to stay cautious and vigilant toward potential Soviet aggressors [7]. With the Cold War already in effect and the stakes rising between Soviet-based aggression and U.S./U.N. declarations against it, things in North Korea on the political and economic scale began to weigh in, and heavy lies the crown.

Figure 3. The separating border (38th parallel) between North and South Korea, established in 1945.

The stages of North Korea’s social, political, cultural, and economic transition began in 1945-1948 with the people’s democratic reform. After the devastation and destruction North Korea suffered in the 1950-1953 Korean War, another phase of advancement began with the socialist reform of 1953-1956. Then in 1957-1960 came the socialist revolution, which built up to the socialist construction of 1961-1970. Kim Il Sung’s approach to political, social, cultural, and economic reform was primarily based on the theories of Marxism and Leninism. Fusing the two theories together branded Kim Il Sung’s style of leadership and socialist construction as his own Juche ideology (self-identity, self-reliance). The North Korean people were known to give Kim Il Sung far too much credit for what the DPRK endured, making Kim out to be some kind of rock star. The National Liberation Revolution (1931-1945) was the term used to explain Kim’s struggle against and defeat of hostile Japanese imperialist rule, while the People’s Democratic Revolution (1945-1972) was the term crediting Kim with removing American occupation from North Korea and keeping it out. These dual revolutions and their prevailing champion, Kim Il Sung, created the perception among the North Korean people that Marxism-Leninism was the contributing force behind the DPRK’s historic liberation, all thanks to the leadership of Kim Il Sung [8].

The support that the Soviet Union and China gave to North Korea in its 25 years of postwar reconstruction was also quite pivotal to its rising status. The implementation of Kim Il Sung’s ideological liberation of North Korea brought 90% of its industrialized factories, along with property and land ownership, back into the hands of the people and the state by 1949. This economic and property reform was greatly encouraging to the DPRK and gave Kim Il Sung an enterprising image that helped lift his status as the Great Leader. The Korean War destroyed all these gains, but in the postwar years of 1953-1956 Kim Il Sung, along with his communist allies, once again brought industrial production and economic growth back to the DPRK and went on to produce some of its best-known years of economic gains. These gains continued well into the 1970s, and during the days of socialist revolution and socialist construction they included technological and industrial advancements along with rising agricultural and educational improvements. Once again, the North Korean people credited these reforms to Kim Il Sung’s leadership, his impeccable representatives, and his continual guidance, which raised the standard of living in the DPRK to a never-before-seen historical level. Yet without the Soviet Union’s economic support, North Korea’s growth would have been almost nonexistent [9].

Since the 1953 armistice, the unification of Korea has been a topic of noble pursuit. From 1953 to 1970 the divided Koreas were very wary and distrustful of each other, content instead to spend those years rebuilding their own countries and domestic economies. In August of 1971 Kim Il Sung publicly declared that he was ready to hold peaceful talks with the South Korean leadership and diplomats anytime and anywhere. Following this gesture, in March of 1972 leaders and diplomats from both sides began talks in both Pyongyang and Seoul. Unfortunately, the two very different governments could not come to even marginal agreement on the unification of Korea, though they both continued public peace talks well into the 1980s. Currently, both sides still harbor enormous distrust and despondency toward one another [10].

The Cold War was by all accounts very beneficial to the developing North Korean regime, as it received aid from the Soviet-led Council for Mutual Economic Assistance. As a communist-socialist totalitarian state, the DPRK made a consistent and obedient ally for the Soviet Union, while Russia became one of the biggest importers of North Korean goods, so the DPRK’s economy relied heavily on the Soviet foreign relationship. In the 1980s the DPRK’s economic growth began to spiral downward. Then, when the Soviet Union collapsed in 1991, it turned its back on North Korea, and as a result the North Korean economy collapsed and widespread famine hit the DPRK. The 1990s beset the North Korean regime with many problems: its industrial sector became old and outdated, and its once-productive agricultural land became exhausted while depending on large quantities of expensive fertilizer. All the while, the North Korean government turned a blind eye to the deteriorating civilian sector, spending the majority of the country’s resources on the military and the elite sectors of the regime. Eventually the North Korean won became worthless, and the peasant society had to turn to illegal black-market activities based on the U.S. dollar. Soon almost all North Korean restaurants, hotels, businesses, and embassies were running on U.S. dollars as well as Chinese yuan. The North Korean black market has become one of the biggest sources of income for the country’s poor working-class citizens [11].

Kim Il Sung died on July 8th, 1994, at the age of 82 of an apparent heart attack. The entire North Korean population publicly wept hysterically for several days as its great liberator was laid to rest. His son Kim Jong Il took his position of power as leader of North Korea. Kim Jong Il is responsible for making the communist state a nuclear one, and he further built up its military supremacy while his people were starving in the streets [12]. Kim Jong Il died on December 17th, 2011, at the age of 69 after a heart attack, much like his father. Once again the entire North Korean population publicly wept hysterically as he was laid to rest. Speculation holds that Kim Jong Il suffered a stroke in 2008 and began grooming his youngest son, Kim Jong Un, as successor to the seat of power and the fate of the North Korean regime [13]. Kim Jong Un was put into power immediately after his father’s death and is called “The Great Successor.” The Kim dynasty started in a time when it could thrive, but that time is over. It would appear that Kim Jong Un has invested in and leveraged himself with North Korea’s military-elite complex just as his father did before him. The Great Successor has inherited his own kingdom, and he is still enforcing it through oppression and intimidation. Let the world keep a limelight on the North Korean regime, and an open heart toward its all too often struggling citizens.


[1]. Park, Yeonmi, and Thor Halvorssen. “Focus on the Suffering of North Koreans.” The Wall Street Journal Asia; Hong Kong, 10 May 2017. Web. 30 June 2017.

[2]. Park, Yeonmi, and Thor Halvorssen. “Focus on the Suffering of North Koreans.” The Wall Street Journal Asia; Hong Kong, 10 May 2017.

[3]. Cumings, Bruce. Korea’s Place in the Sun: Updated Edition. New York: W.W. Norton, 2005. Print.

[4]. Lankov, Andrei. The Real North Korea: Life and Politics in the Failed Stalinist Utopia. New York: Oxford UP, 2013. Print.

[5]. “WAR IN KOREA.” New York Times, 26 June 1950. Web.

[6]. Cumings, Bruce. North Korea: Another Country. New York: W.W. Norton, 2004. Print.

[7]. Sass, Fred J. “PRESIDENT IS HAPPY: But Warns in Broadcast That Global Peace Is Yet to Be Achieved; EISENHOWER URGES U.S. STAY ON GUARD.” New York Times, 27 July 1953. Web.

[8]. Kim, Ilpyong J. Communist Politics in North Korea. New York: Praeger, 1975. Print.

[9]. Kim, Ilpyong J. Communist Politics in North Korea.

[10]. French, Paul. North Korea: The Paranoid Peninsula – A Modern History. London and New York: Zed, 2007. Print.

[11]. Cullinane, Susannah. “How Does North Korea Make Its Money?” CNN. Cable News Network, 9 Apr. 2013. Web.

[12]. Reid, T. R. “North Korean President Kim Il Sung Dies at 82.” The Washington Post, 9 July 1994. Web.

[13]. Brown, Kerry. “Kim Jong-il Obituary.” The Guardian. Guardian News and Media, 18 Dec. 2011. Web.







The Origins of Ireland’s ‘Terrorism’ against Great Britain



In 2016 British security officials searched woodland in rural south Devon for bombs or weapons that may have been smuggled from arms stores in Ulster. The officials worked to stop Irish terrorists from staging attacks against Britain or Northern Ireland, and they believed a serving member of the Royal Marines was a part of it [2]. The man was arrested for questioning and holding at the end of August 2016, in the hope of bringing an end to yet another terrorist attack. Ireland, especially the northern part of the island, has experienced terrorist attacks or suspected ones since the early 1900s, but officials have usually been able to protect it. In mid-2016 they stopped yet another attack when bombs and assault rifles, along with ammunition, were found in County Antrim. After arresting Ciaran Maxwell, police made the statement that the day’s “arrest was planned and intelligence-led as part of an investigation into Northern Ireland-related terrorism being led by SO15 [Met’s counter-terrorism command] in collaboration with Police Service of Northern Ireland (PSNI) and the south west counter-terrorism intelligence unit.” [1] The threat of Irish terrorism to the British mainland has risen substantially, and officers continually race to end the staged attacks. Threats from dissident republican terrorism are becoming increasingly potent with every staged attack against Britain and Northern Ireland. When Britain first imposed its force on Ireland in the early 1900s, choking off peace and unity, what were the reactions from Ireland, and what additional force did Britain use that caused Ireland to turn terrorism upon Britain, even to this day?

The British government came into Ireland with the hope that the two could build a strong relationship as two nations. During the early 1900s Ireland split in two, creating a North and a South, because of the differences between them. These differences included the way each side believed a country should be run, their religious beliefs, and their opinions on their own military force and on military force from other countries. One thing is for sure: having the military force of Britain in control of the home country of Ireland upset many people and was the beginning of many problems. One of these problems was the Government of Ireland Act 1920, or the Fourth Home Rule bill, which was created by the Parliament of the United Kingdom to help Ireland build a better government; however, it only took effect in Northern Ireland. Ireland’s government had asked for help from others, but it began to feel threatened, for it felt that allowing others to come in and take control made powers such as Britain believe they could do so whenever they liked. Having the threat looming over your head that someone may swoop in and take over, just because you asked for a little help one day, is understandably frustrating for the country of Ireland. Both sides of Ireland have their thoughts on Britain and the way it is ruled, and even if Britain was really just trying to help, Ireland is still filled with anger. It seems to be a self-conscious problem: Ireland does not want the world to see that it needs Britain to step in when it has issues, and with the anger built up in its people, it began a reign of terror upon the people of Britain.

During what could be called its ‘Victorian’ era, Ireland had a relationship with the imperial power of Britain. This was during the very late 1800s and early 1900s, when the North and South of Ireland were united and had a stable relationship with Britain. There are a few reasons why Britain and Ireland formed that relationship, one being that by “creating a single market between Britain and Ireland, it established a free-trade zone, an important precedent in the gradual progression of economic liberalism.” [3] Another reason was the harmonization of taxation, and thirdly, the hub of Ireland’s economy shifted from Dublin to London, which meant Irish interests became British interests. In the 1920s Ireland and Britain could not agree on a treaty that had been created, and Britain became an enemy to Ireland over that one disagreement, placing an army in Dublin and occupying buildings to prove that it felt it knew what was right. After this act the union between the two countries ended, and not only did Britain and Ireland begin to turn on each other, but the North and South of Ireland turned on each other as well; those are the reasons for the later split between everyone.

After WWI the British Labour Party was forced to create a relationship with Ireland, which was over time becoming an increasingly militant nation. It was believed that with a reshaped relationship between the two countries, even with their differences, together they could become strong, responsible, and powerful, and could end any and all fights that were brought to them instead of starting new ones. However, there was fear of the relationship being too close, or of a negative impact coming out of it; various conferences were held in the United Kingdom, and rules were made both at home and outside the United Kingdom concerning Ireland. This rule making and determination to impose its knowledge and power on Ireland caused the independent nation to become angry with the United Kingdom; yet again, just as expected, Britain made a choice that did not settle well with Ireland. The Parliamentary Labour Party (PLP) made confusing and often contradictory decisions that impacted the future of Ireland and its relationship with Britain, even if neither country liked the decision that was made.

After Ireland had changed its relationship with Britain for good and then split into two, becoming the North and South, the North faced some consequences. Some people believed that Ireland would have split up even without the war, and Northern Ireland began leaning more toward independence and self-government. Britain and Ireland did not want to be mutual friends anymore; Ireland wanted its own identity. The end of the war came, and a “majority of Nationalists apparently supported the establishment of an Irish republic outside the British Empire. It is, however, more accurate to state that the vast majority of the nationalist community would have settled for a form of self-government on Dominion lines, within the Empire but outside the United Kingdom.” [5] People were able to see this coming because problems were beginning to form in the connection between Ireland and Britain, and while these conflicts were hoped to be short-lived, they sadly lasted and were not resolved. The process had begun that led to the polarization of both nations’ identities. Figure 1 helps give an idea of what happened when Ireland decided to split ‘in half’ between North and South; as pictured, the South got most of the land. It is one island but two countries.

Figure 1: The division of Ireland between the North and South.

Alvin Shuster, a newspaper journalist, wrote a special for the New York Times on August 20th, 1969, covering the story of Britain taking control of security in Ulster. The hope was that once the British force took control there would be peace, and a lesson for other countries, so that the religious and governmental problems would be resolved and would not recur anywhere else. The British government took full control of authority and security in Northern Ireland; this was to help end the discrimination against the Roman Catholic minority and to help the government regain control of the country. Northern Ireland was having a problem with discrimination, and Britain stepped in to help end it, but this was because the fighting was being brought to Britain and harming its own people, not because the government of Britain simply decided to help Ireland out of the goodness of its heart. Britain seemed to have a plan for making progress against the discrimination problem and then gaining control of the country, and all was well with receiving help, but it did not sit well with the people or the government to know that someone else was taking charge. What was wanted was reassurance that the British military could help bring an end to the fighting, with hope that the two nations would become equals and be able to get along. Ireland may not have liked it, but “Britain retained the power to intervene in the North in time of emergencies.” [6] Britain proved to be a good help whether Ireland liked it or not, and it liked the power, knowing that keeping Ireland safe from fighting meant Britain would be safe as well.

A piece in International Security examined life in Ireland and Britain ten years after Britain stepped in to stop the discrimination against Catholics. International Security is written so that the world is aware of each country’s predicament, the connections between them, good or bad, and what political moves have been taken. Ten years later one would think the terror would have ended, but it turns out that the massacres had grown around the country, and some believe that Britain’s involvement in the first place may have solved some problems but also created or strengthened others. There have been many proposed solutions, but none seem able to resolve the conflict over how to get Northern Ireland onto the same page and end what is no longer just discrimination but has grown into pure violence and terror toward other countries such as Britain; other problems, such as the economy, have also arisen. The question is whether the British military force can remain successful and necessary, or whether the IRA can handle the situation before the ties between the two countries are broken.


Figure 2: British troops confront Irish protestors during the Irish War of Independence, 1920. This picture is from Gerard Murphy’s ‘The Year of Disappearances’.

Ireland saw developments but also political, economic, and social problems for both North and South, especially after the treaty of 1920. There had once been civil peace and rising prosperity, but then troubles and tensions led to violence. In Figure 2 the British troops can be seen in Ireland; it is 1920, the war for Irish independence, and the British troops are confronting Irish protestors. There are competing views of Ireland and its future in British discussions: “Ireland is a single nation, defined by geography, and entitled to self-determination. Britain is seen as the historical impediment to the achievement of that ideal. Central to the Unionist interpretation is that Ireland is home to two nations, each entitled to self-determination.” [8] None of this is new; it has been at the heart of the matter for a while, with an underlying hope that the North and South would become united once again and work together, but this can only happen if the North is willing, because the North holds the upper hand in power and strength. The discrimination against Catholics has come to a close, but when looking at unemployment rates for Catholics and Protestants, there is a feeling that part of the fight is still there, because more Catholics are unemployed, especially in Northern Ireland; this is another reason Ireland becomes angry, and who is there to blame but the country that stepped in and tried to solve everything? However, there are feared social and health problems that keep unemployment rates high, and the country is working to find a way to fix them. An economic problem for Ireland arose when Britain declared it no longer wished to remain tied to Ireland in economic interests; this was a hard reality, mostly for Ireland, because it lost a large market, which hurt its people economically and caused more hatred toward Britain.
There have been some political negotiations, but none have come to any good, and this is an uncomfortable reality that has created a complex picture of Ireland’s future.

Figure 3: A poster used to strengthen the Irish people’s resolve to fight to get the British troops out of their country.

Clearly, as stated many times, among the borders dividing the Irish are religion and civic commitment, along with some ethnicity and language. As pointed out at the beginning, the history and connection between Britain and Ireland is very rocky, and the record is not the cleanest. Figure 3 shows a poster used in the late 1900s to make the Irish people feel empowered to fight to get the British troops out of their country. Today Ireland still struggles with the border of religion, as it is part of what caused Ireland’s hatred toward Britain, and with civic commitment, because Ireland cares about its people and about having control, but it cannot have that control if another country, Britain, steps in and takes it away, even making the problem worse by starting a fire of anger. Ireland’s politicians became inspired to develop a tighter organization, realizing its significance to the country and knowing they could do it alone, without the overcontrolling help of Britain. However, what Ireland must now bring to a close is the terror its people wish to bring upon the country that put them through so much and hurt them in areas such as money, health, education, military, government, and religion. It is known that all the military force Britain put upon them, taking control instead of just lending a hand, making some problems worse and creating new ones, is part of the many reasons for the terror against that country; now the question is, how can Ireland alone end it? Surely they have learned their lesson and will not bring in another country to make their decisions for them and tell them how to run their own country, but what actions will they take to fix the problems and end the terror?

Britain came into Ireland in the 1900s and choked the peace and unity from the relationship when it tried to take control, even though at the beginning the aim was only to help. Britain was at first supposed to step in to restore order and end the discrimination and violence in Ireland, both to help Ireland and to protect Britain from being in harm’s way as well. However, it went too far as soon as the British military stepped into Ireland, taking over and leaving what seemed to be no power for the government of Ireland, all the while claiming it was there to help. These actions caused Ireland to lash out and fight for its independence and freedom from Great Britain, which of course caused Britain to fight back. The result was a circle in which each country grew angry at the other over actions that both began in the 1900s, and today they still have problems with terrorism, especially attacks on Britain from Ireland. People read the paper and think to themselves, what did Britain do to deserve this pain and suffering, and why is Ireland full of such evil people? But what they do not know is how Britain stepped in without being asked and caused more problems than it stopped; how religion came into the mix; how the military force took over the government of Ireland because Britain felt it knew best; and how the two have had disagreements over power, military control, and treaties. Amongst all the ‘help’ Britain supplied to Ireland, it really laid stepping stones toward a country of Irish people filled with anger toward Britain, not all but many, which was also one of the many reasons for the split within the Irish country. It is best to ask for help only when it is really needed and when the help that comes can be controlled; nations must stand united and strong and keep good relationships, with peace rather than fighting and wars.




Figure 1. The Island of Ireland, 2014,

Figure 2. The Year of Disappearances, 1920,

Figure 3. Troops Out of Ireland!, November 15, 1973,

Humans, Agriculture, and Environment: The fuel for an ever-changing landscape in Brazil.

Humans are the social underpinning of change upon the environment. The landscape changes according to many human-focused agendas, shaped by our values and beliefs. “On the social plane, technological progress is closely interconnected with the common problems of the society.” [1] Land-use change tracks human consumption, including agriculture, resource use, development, profit, and politics. Humans have historically fueled changing landscapes. Was the institution of monoculture agricultural practices a major contribution to an ever-changing environmental and social landscape in Brazil?

There is historical evidence of small-scale deforestation by human activity since the beginnings of mankind. Large-scale deforestation, however, is attributed to colonial and capitalist industrial activities, including monoculture agricultural practices. The following historical evidence introduces the beginnings of large-scale deforestation of the world’s largest tropical rainforest: the Amazon.

Brazil is a country located near the equator and hosts the world’s largest tropical rainforest. The Amazon rainforest is home to the greatest biodiversity of plant, insect, and animal species on the planet. This rare gem is noted as the heartbeat of the Earth’s precipitation patterns. “The Federative Republic of Brazil, with an area of 3,286,500 square miles, is the largest country in South America and shares a boundary with every country of the continent except Chile and Ecuador. Slightly smaller in size than the United States, Brazil is the fifth largest country of the world. Most of it falls within the Tropics, but the populous industrial zones of the south are in the temperate zone. It is a land mainly of flat and rolling lowlands in the north with some plains, hills, and mountains in other parts of the country. Its highest elevation is Pico de Neblina, at 9,886 feet.” [2]

Figure 1. Geographic map of regions of Brazil

“As Michael Williams notes in Deforesting the Earth, deforestation is as old as the human occupation of the earth. Half of the forest that has vanished from the earth was gone before 1950 (see Map 15.1). But the footprint of humans on a landscape is not always that of a logger’s boot leaving destruction in its wake – sometimes, forests spring up in human footsteps, particularly when people suppress fire or build soil for agriculture, and then abandon plots.” [3] It is the instance of human engineered agriculture and subsistence that first allows for environmental aggravation of a landscape. While homestead subsistence is small-scale, the institution of commercial scale for profit agricultural practices is vast in its reach. Human interaction with the environment after the use of fire and homestead has always been taxing. When the colonial powers toured around the globe in search of resources and profit by acquiring land and inhabitants, they created large-scale infrastructure, which engaged in deforestation of the same magnitude.

 Figure 2. Brazil Provincial Railroads 1902?

This artifact is a map of Brazil, possibly from the year 1902, labeling imperialistic colonial railroads throughout the country, from coastal ports to the interior, including the Amazon rainforest. Native Brazilians did not build these railroads; they were built by the colonial rulers to siphon Brazil’s rich resources into the pockets of the colonists. The natives did not have the infrastructure or education of the colonial powers; they instead relied on local resources to produce and trade local goods and services. The railroad system was an intrusion that stripped the natives of their land and labor for the benefit of the colonial populace. This map is historical evidence of early deforestation by imperialists for mining, timber, and agricultural use. The natives no longer had sovereignty over their land; they did not even own the map.

During the era of decolonization, a United Nations intellectual think tank was formed in an effort to fight world hunger. The Food and Agriculture Organization (FAO) of the United Nations was founded in 1945. The FAO conducts global forest resource assessments through country reports prepared by National Correspondents on forested points of interest around the globe. Its first assessment commenced in 1946 and was published in 1948. That year, Director-General Norris Dodd issued a statement of conservation regarding tropical forestlands in South America, including Brazil. His idea of conservation included the possibility of abundance from agricultural means on this fertile tropical land, in the form of deforestation. He claimed that the harvest of these lands could yield “a comprehensive and unified program of conservation designed to replace scarcity with abundance” and that the land “may provide a continuing flow of products to satisfy human wants.” [4] This statement was cited in the introduction to a section titled Unsustainable Yield: American Foresters and Tropical Resources in a book entitled Insatiable Appetite. The era was colonial, and the idea was rooted in imperialism turned neo-colonial capitalism. This type of ideology and practice may well also explain the ideology and practice of racism and slavery under a themed agenda of entitlement and superiority, affecting not only the environmental landscape but the social landscape as well. The report amounted to European land grabbing for commercial-scale resource use in the name of fighting world hunger.

Figure 3. History of the FAO, UN with photo of the founders.

“Monoculture, slavery, and latifundia, but principally monoculture; they opened here, in the life, the landscape, and the character of our people, the deepest wounds.” [5] Early in an extended 1937 essay on the Brazilian Northeast, the Pernambucan writer Gilberto Freyre sketches this melodramatic summary of the Pernambuco sugar region’s historical inheritance. Freyre indicts the brutal, race-based system of bondage and the concentration of land ownership in the hands of a powerful and avaricious few as destructive forces in his region’s past. But he felt that the ills of monoculture, the extensive cultivation of a single crop, exceeded either of these. Sugar, not the people who profited from it, brought slavery and required production on a large scale, foreclosing on the emergence of a balanced, diverse agricultural society. He used similar imagery elsewhere, describing “the two running sores of monoculture and slavery, two wide-open mouths that clamored for money and for blacks.” For Freyre, sugarcane and African slavery tore into and lay open the land and left behind an injured society. They polluted rivers with sugar mill waste, destroyed vast forests, and fostered the brutal domination of slaves by masters. [6]

Globalization and the drive for intensive resource use are the historical and contemporary theme of the deforestation of the Amazon rainforest in Brazil. Globalization is based upon capitalism and international trade, with human consumption patterns and an increasing population. Globalization is both a principal economic value to the highly industrialized nations and a societal ill lacking equity for all of Earth’s inhabitants, plant, animal, and insect. “The time seems to be right for a major effort to gather all our present knowledge on Amazonia’s history and evaluate the problems and existing controversies…Amazonia is suffering badly from human activities…The importance of the Amazonian rainforest and its enormous biodiversity for the conservation of the environmental equilibrium of the earth can only be underestimated. Moreover, the expected negative effect of the disappearance of a major part of the forest on both Amazonia and Earth as a whole, would affect us all…The conservation of Amazonia, and a better understanding of its plant, animal and human life, is doubtlessly related to the future well-being of our planet.” [7] The definition of conservation in Williams’s book contrasts sharply with the views on the term and ideology of conservation stated in the 1948 report of the Food and Agriculture Organization of the United Nations (UN FAO). Conservation, by its very definition, is about conserving resources rather than exploiting them, as the 1948 UN FAO report proposed.

The historical roots of deforestation by human activity tell us that industrial, commercial-scale endeavors were the driving force for large-scale deforestation of the Amazon rainforest in Brazil. Although human activity has caused deforestation since the advent of fire, such activity was subsistence-based and produced only small-scale environmental degradation. When we ask whether the institution of monoculture agricultural practices was a major contribution to an ever-changing environmental and social landscape in Brazil, history tells us that subsistence farming is rooted in diverse cropping methods practiced for family and small-village consumption. By contrast, industrial commercial practices devote large areas to a single resource or crop. When a small area of forest is cut, the forest’s fertility returns once the area is abandoned, whereas when a large area is deforested through clear-cutting or large-scale burning, the area is too large for the forest to return to its original form. These scenarios, in broad contrast, tell us that the institution of commercial agricultural cropping practices indeed contributed to an ever-changing environmental landscape. Colonial history also tells us that the institution of monoculture fueled change in the social landscape for native Brazilians as well as for the colonial populace and their African slave laborers. Large-scale deforestation by way of colonial and neocolonial/capitalist ventures also affects the contemporary environmental and social landscapes for the Earth and all who inhabit the planet, plant, animal, and insect. The Amazon is said to produce twenty percent of our atmospheric oxygen, a capacity that deteriorates with each passing day of commercial activity. Globalization, whether colonial, neocolonial, or contemporary, is the driving force behind deforestation of the Earth’s largest rainforest: the Amazon.


[1] Author Unknown, “Influence of humans on environment”, The Assam Tribune June 05, 2013, (accessed July 1, 2017)

[2] Meade, Teresa A. A Brief History of Brazil. 2nd ed. New York: Facts On File, 2010.

[3] McNeill, J. R., Stewart Mauldin, Erin, and Mauldin, Erin Stewart, eds. Companion to Global Environmental History. Somerset: Wiley, 2012. Accessed July 5, 2017. ProQuest Ebook Central.

[4] Norris Dodd, Introduction to Unsustainable Yield: American Foresters and Tropical Timber Resources; or Insatiable Appetite The United States and the Ecological Degradation of the Tropical World, by Richard P. Tucker. Berkeley, California; London: University of California Press, 2000.

[5] Rogers, Thomas D. The Deepest Wounds : A Labor and Environmental History of Sugar in Northeast Brazil. Chapel Hill: University of North Carolina Press, 2010.

[6] Moraes, Mello, and Toppa. “Protected Areas and Agricultural Expansion: Biodiversity Conservation versus Economic Growth in the Southeast of Brazil.” Journal of Environmental Management 188 (2017): 73-84.

[7] Hoorn, C., and Wesselingh, F. P. Amazonia: Landscape and Species Evolution, a Look into the Past. Chichester, UK; New Jersey: Wiley-Blackwell, 2010.

[8] Williams, Michael. Deforesting the Earth From Prehistory to Global Crisis, An Abridgment. Illinois: University of Chicago Press, 2010.


Figure 1. Geographic map of regions of Brazil. Meade, Teresa A. A Brief History of Brazil. 2nd ed. New York: Facts On File, 2010. pg. xv.

Figure 2. S.l. Lith: do Imperial Instituto Artistico, “Brazil Provincial Railroads”. 1902? via Accessed July 11, 2017.

Figure 3. Food and Agriculture Organization of The United Nations.

The Rise of Radical Islam by U.S. Cold War Policies

On January 27, 2017, President Trump introduced a travel ban that included seven majority-Muslim countries. The move sparked outrage and was labeled a “Muslim Ban” by many. Recently, the Supreme Court ruled to uphold an altered version of the ban. Previously, even those who held valid visas in any of the banned countries were not permitted to enter the U.S.; with the Supreme Court revision, they can. However, new visa applicants residing in any of the banned countries will not be permitted entry for 90 days if they cannot prove they have a “bona fide relationship” with close relatives or a business in the United States. [1] This also applies to Syrian refugees, who will be banned for 120 days if they cannot prove such a relationship exists, excluding those refugees “already vetted and approved for travel through July 6.” [2] Additionally, Iraq is no longer a banned country. With this ban in mind, how has U.S. economic and military intervention in the Middle East during the Cold War era affected the rise of radical Islam? Specifically, how have the United States’ oil and gas obtainment efforts and the strategic positioning of troops and bases contributed to the rise?

The United States’ international expansion efforts in the Middle East during the Cold War era, with regard to obtaining natural resources and a strategic advantage, are partly responsible for the rise of radical Islam.

This theme can be seen in the film “Oil for Dollars” by the Standard Oil Company of California, essentially an American propaganda film highlighting the benefits of oil extraction in the Middle East while attempting to project a theme of mutual respect and cooperation. The film claims that America’s oil drilling in Saudi Arabia “…provided a richer life to the Saudi Arab…” by employing Saudis at refineries. However, the film also demonstrates an example of a Middle Eastern regime holding seemingly “pro-western” policies, which had the same negative effect as Pahlavi’s stance discussed in the next paragraph: it reminded many Muslims of how Western culture, specifically American culture, runs counter to Islamic culture. Among these Muslims, of course, were radical Muslims willing to exploit such events as recruiting opportunities.

Figure 1: Oil for Dollars propaganda video.

The struggle for control of the Iranian government in the 1950s would have been a different battle if not for the foreign policies of America and other Western countries. Mohammad Reza Shah Pahlavi took control of the government in 1949 and was criticized for his “pro-western” foreign policy. The editors of Encyclopedia Britannica note in their article “Mohammad Reza Shah Pahlavi, Shah of Iran” that Pahlavi “faced continuing political criticism from those who felt that the reforms did not move far or fast enough and religious criticism from those who believed westernization to be antithetical to Islam.” In other words, many felt Pahlavi’s pro-western policies were incompatible with Islamic culture, awakening many radical Islamists to the “burden” of Western culture. These events only motivated more radical Islamists to act out violently. Overall, America’s “spread” of Western values via the Cold War oil grab actually generated momentum for Islamic terror groups.

Figure 2: Pahlavi meets with President Roosevelt.

Moving on, Professor Paul Thomas Chamberlin wrote about America to University of Kentucky students in his journal article “America’s Great Game: The CIA’s Secret Arabists and the Shaping of the Modern Middle East, Nixon, Kissinger, and the Shah: The United States and Iran in the Cold War,” saying “…they ended up replicating much of the British imperial experience in the Middle East.” With this quote, Chamberlin refers to the enormous increase in U.S. military presence in the Middle East during the Cold War era and America’s new image as the “colonizer,” which radical Islamists yet again used to run a rebellion campaign of sorts. Like Doenecke, Chamberlin writes from an authoritative position as a professor, which establishes some common ground between him and the United States government in the scenario he describes.

A journal article titled “Revisionists, Oil and Cold War Democracy” by Justus D. Doenecke broadly but accurately depicts America’s aggressive scavenger hunt for oil and other natural resources in the Middle East during the Cold War era. The article, published in Iranian Studies, aims to inform college students of Iranian studies about America’s Middle Eastern oil politics, and with the piece being written in 1970, its contemporary relevance and Doenecke’s credibility as a professor likely gave it a lasting impact. Doenecke asserts that “Petroleum has historically played a larger part in the external relations of the United States than any other commodity,” and that “…America was indeed insisting that Anglo and Soviet pipelines, constructed with her lend-lease aid, be made available to her own companies after the war.” Here we can see that America forcefully inserted itself into the Middle East, and Doenecke’s keyword “insisting” suggests that America was met with opposition. Doenecke’s points support my claim that America’s efforts to obtain oil and natural resources in this period contributed to the rise of radical Islam: America’s aggressive and frankly invasive approach gave radical Islamists of the time a recruiting tool of sorts. It allowed terror groups to frame America as the intruder, implying that rebellion through terrorism was righteous. The idea of attacking in the name of retaliation was far more palatable to many than the idea that they were attacking an innocent country. This idea also formed in response to military advancements like those mentioned in the previous paragraph, and it substantiates the claim some make in response to terror attacks on our soil: “We asked for it.”

Figure 3: An example of what the complex network of pipelines that American companies sought access to looked like.

Radical Islamists were already aware that Western culture was in many ways the antithesis of Islamic culture, but the implementation and support of Western policies and culture right in their neighborhood, as in the last two examples, allowed their message to be more easily understood and echoed, accelerating the targeting of Western countries like America for terrorist attacks. The time period covered in this piece encompassed a “Muslim awakening” like the one mentioned in previous paragraphs. This statement is not meant to generalize, and I am not implying that all Muslims are terrorists. I am simply stating that some of those not previously considering affiliation with or support of an Islamic terror group in this period were moved by some of America’s Cold War activities and by terrorists’ exploitation of them.

In conclusion, America’s venture into the Middle East in the Cold War era poked a sleeping bear, appearing to “activate” radical Islamists throughout the region through the provocation of a seemingly invasive approach toward obtaining petroleum and other natural resources, an increased military presence giving off a colonial essence, and the adoption of pro-western policies by regimes in the region. While the target on America’s back was going to be placed there inevitably by Islamic terrorists seeking to fulfill the Islamic caliphate, her Cold War era activities gave terror groups an edge in persuading others to support their cause and means, both materially and spiritually.


[1] Bangkok, “United States: Travel Ban, First On, Then Off, Is Back – But It’s Different.” Asia News Monitor, July 3, 2017, (accessed July 30, 2017).

[2] Asia News Monitor, July 3, 2017.

[3] Rashid Khalidi, Sowing Crisis (Boston: Beacon Press, 2009), 308.

[4] Roby C. Barrett, Greater Middle East and the Cold War (New York City: I.B. Tauris, 2014), 521.

[5] Jeffrey James Byrne, “The Middle Eastern Cold War: Unique Dynamics in a Questionable Regional Framework,” Cambridge University Press, Volume 43, Issue 2 (May, 2011): 2

[6] Vassilas K. Fouskas, Zones of Conflict (London: Pluto Press, 2003), 184.

[7] Justus D. Doenecke, “Revisionists, Oil and Cold War Democracy,” Iranian Studies Volume 3, Issue 1 (Winter 1970): 10.

[8] Paul Thomas Chamberlin, “America’s Great Game: The CIA’s Secret Arabists and the Shaping of the Modern Middle East, Nixon, Kissinger, and the Shah: The United States and Iran in the Cold War,” Cold War History Volume 15, Issue 4 (November, 2015): 3.

[9] The Editors of Encyclopedia Britannica, “Mohammad Reza Shah Pahlavi,” Encyclopedia Britannica, December 15, 2016, 1.


Figure 1. Propaganda video made by the Standard Oil Company of California depicting U.S. oil extraction in the Middle East, 1948,

Figure 2. 1943 image of Mohammad Reza Pahlavi meeting with U.S. President Franklin D. Roosevelt during the Tehran Conference, 1943,

Figure 3. 2016 map of oil and gas pipelines/fields by Southfront.Org, 2016,



Research Assignment #5

Governmental Instability, Drugs and Violence Lead to the Immigration “Crisis”

There is a renewed call to stem the tide of illegal immigration from Central America. Many people in the country, largely led by the current administration, blame illegal immigration for a number of problems in the United States. Illegal immigration is said to be responsible for an increase in crime, a drain on governmental resources, and the loss of jobs that should go to citizens. As a real and symbolic gesture of the government’s desire to rid the country of illegal immigrants, and to prevent others from coming, a wall running along the U.S.-Mexico border and towering thirty feet high has been proposed. Despite the negative rhetoric and the tendency to blame immigrants for America’s ills, immigrants continue to make their way to the U.S. Why are conditions so bad in these countries that immigrants keep coming to the United States despite the admitted and obvious dangers? The violence can be traced, in part, to the United States’ long record of interfering in the affairs of Latin American countries and the long-standing exploitation and profit made off the backs of Latin American citizens.

Through its interventions, the U.S. helped establish or supported brutal dictatorships in Latin America, including in both Guatemala and El Salvador in the 1950s. As a result of the U.S.-supported dictatorships, extreme inequality became the norm in both countries’ economies, and this, combined with the impossibility of political dissent, led to guerrilla movements in both places in the 1960s. What began as U.S. intervention in each country ultimately produced civil war and violence. [1] As instability reigned, drug lords moved in to fill the void. In a 2014 Wall Street Journal article, Mary Anastasia O’Grady suggested that we consider the imperial role of the United States in fomenting the immigration “crisis” of today. O’Grady argues that the United States’ appetite for drugs is a cause of the violence that made life unbearable in much of Latin America. [2] It is estimated that the homicide rate in Honduras is 90 per 100,000, and 40 per 100,000 in Guatemala. [2] As the United States’ immigration policy becomes more strident and immigrants are deported and turned away at the border, many questions arise. How did the U.S. contribute to the governmental instabilities and drug culture that produced the current immigration “crisis”?

 Figure 1: Painting depicting acts of the guerrilla movement.

The U.S. contributed to the current immigration “crisis” by destabilizing Latin America’s political systems through its support of dictatorships, which led to guerrilla movements and civil war and eventually gave rise to a drug trade spurred on by U.S. demand. In an attempt to escape the violence that only escalated under the drug lords, immigrants moved to the United States for a better way of life.

Figure 2: Here is a picture of Colonel Castillo Armas in Guatemala, 1957.

The study of U.S. intervention in Guatemala begins with Colonel Castillo Armas, an unlikely Guatemalan leader given his shameful exile. According to Paul Kennedy, writing for the New York Times on July 4, 1954, “Exactly two years ago from the day he landed in Bogota, Colombia, a political refugee with a price on his head, he came back to the capital of his country here and received a thunderous welcome.” [3] Propped up and supported by the U.S. government, Armas referred to the works of the previous regime, led by President Jacobo Arbenz, as “the farce that has been taking place here.” Continuing the tradition of violence common in Guatemala, Armas’ military government immediately executed enemies of the state, including prior Communist leaders and supporters. Backed by the U.S. government, Armas marched into the country accompanied by U.S. Ambassador John E. Peurifoy. But the violence did not end, as the military junta eliminated Communist enemies by firing squad and other lethal methods. Welcomed with open arms by Guatemalans, and supported by the U.S. in the United Nations, Armas represented the symbolic and actual end of communism, carrying hopes of bringing stability, wealth, and relief to a weary people.

It would be difficult to exaggerate the misery of the mainly Indian peasants and urban poor of Guatemala, who made up three quarters of the population. By the early 1960s income inequality had taken hold in the country: a few hundred families possessed almost all the arable land, public health services were virtually non-existent, and thousands of families were without work, jammed together in communities of cardboard and tin houses with no running water or electricity. Against this backdrop the guerrilla movement was organized, writes Norman Gall of the New York Times in 1971. [4] The guerrillas organized peasant support in the countryside, attacked army outposts to gather arms, and staged kidnappings or bank robberies to raise money, while trying to avoid clashes with the Guatemalan military. Recruitment of the peasants was slow and difficult because people struggling simply to stay alive could hardly muster the courage to fight back. The guerrillas were factious groups with no real ideology beyond the desire for a more equitable society and nationalist pride. The American military mission stationed in Guatemala viewed the movement as a “communist threat” and took steps to eradicate it by establishing a base designed for counterinsurgency training. [5] To encourage the peasants to abandon the guerrilla movement, the U.S. provided some benefits, such as building wells, distributing medicines, and providing school lunches. However, no real reform was undertaken by the U.S., leading to increased insecurity and instability. [6] The insecurity and power vacuum resulted in the growth of a drug trade that worsened the violence.

Figure 3: Guatemala Drug Trafficking Routes in 1950.

The drug trade in Guatemala and in Latin America consists mostly of cocaine and cannabis. It not only hurt the country as a whole but also bred more violence, instability, and inequality. The drug trade started with family-based groups with long criminal histories of contraband, human smuggling, and other criminal activities, which gave them a dangerous foundation from which to jump into narcotics trafficking. According to Nicholas Gage of the New York Times, the drugs were handled by businessmen and professionals who had grown so politically and economically powerful that they could operate with virtual immunity from arrest and prosecution. [7] The violence that accompanied the drug trade stemmed mostly from the gangs involved in distribution and from the region’s use as a transshipment point for U.S.-bound narcotics. [8] For example, in 1975 drug gangs killed 40 people in one weekend after police seized 1,320 pounds (600 kg) of cocaine in one of the first big drug hauls. The violence, the fragile institutions, and the scale of the drug trade in Guatemala and Central America led tens of thousands of Guatemalans to arrive in the United States seeking asylum from their region’s skyrocketing violence and drug trade. [9]

Illegal immigration into the United States is the result, in part, of early U.S. intervention in Latin American politics. The establishment of brutal dictators led to extreme violence as the dictators sought to exterminate enemies. As the dictators increased their power, income inequality imposed extreme poverty on the majority of citizens. As guerrilla armies formed to fight back against the dictators, violence and poverty persisted. As the dictators were dethroned, political and power vacuums resulted that were eventually filled by the drug lords. Drug trafficking continued to promote violence and poverty among the citizens not involved in the trade. Immigration to the U.S. resulted as citizens attempted to escape the conflict for a better life. Had the U.S. allowed Central America to operate its own governments without puppet dictators, would immigration to the U.S. have resulted? It is difficult to know the answer, but it is clear that the U.S. contributed to the waves of illegal immigration from Central America as a direct result of its activities in the middle of the twentieth century.


[1] Haussamen, Heath. We Must Treat Central American Immigrants Humanely. Las Cruces Sun-News; Las Cruces, N.M. July 14, 2014. (accessed June 30, 2017).

[2] O’Grady, Mary Anastasia. What Really Drove the Children North? Wall Street Journal, Eastern edition; New York, N.Y. July 21, 2014, http://ntserver1.wsulibs.wsu.wdu:2090/sfx_local?url_ver=Z39.88 (accessed June 30, 2017).

[3] Paul P. Kennedy, Special to the New York Times. (1954, July 04). “Guatemala Gives Leader of Revolt Rousing Welcome.” New York Times (1923-Current File).

[4] Norman Gall, Special to the New York Times. (1971, March 28). “Guerrilla Movements in Latin America.” New York Times (1923-Current file).

[5] Howard, David, Mo Hume, and Ulrich Oslender. Violence, Fear, and Development in Latin America: A Critical Overview. Development in Practice, Vol. 17, No. 6 (Nov., 2007), pp. 713-724.

[6] Henry Giniger. Special to The New York Times. 1966. “Guatemala Fears Revived Violence.” New York Times (1923-Current File), Jan 17, 1.

[7] Godsell, Geoffrey. 1980. “Central America Caught in Wave of Violence.” The Christian Science Monitor, Aug 18.

[8] Gootenberg, Paul. Cocaine’s Long March North, 1900-2010. Latin American Politics and Society, Vol. 54, No. 1 (Spring 2012).

[9] “11 in Family Slain in Guatemala.” 1980.New York Times (1923-Current File), Nov 18, 1.

[10] O’Grady, Mary Anastasia. What Really Drove the Children North? Wall Street Journal, Eastern edition; New York, N.Y. July 21, 2014, http://ntserver1.wsulibs.wsu.wdu:2090/sfx_local?url_ver=Z39.88 (accessed June 30, 2017).

[11] McPherson, Alan L. “Intimate Ties, Bitter Struggles: the United States and Latin America since 1945.” Washington, D.C.: Potomac Books, 2006. Print. pp.

[12] Gage, Nicholas. “Latins Now Leaders of Hard-Drug Trade.” New York Times (1923-Current File), April 21, 1975.

[13] Balassa, Bela. 1971. “Regional Integration and Trade Liberation in Latin America.” Journal of Common Market Studies 10, no. 1: 58. Business Source Ultimate, EBSCOhost (accessed July 21, 2017).

[14] “Consumer Union Studies Drug Selling in Latin America.” The Hastings Center Report 5, no. 5 (1975): 2.


Figure 1: Painting depicting the acts of the guerrilla movement.

Figure 2: Colonel Castillo Armas in Guatemala, 1957.

Figure 3: Guatemala drug trafficking routes in 1950.




The People of the Sea


Vikings are famous for their violent raids and maritime capabilities. As a New York Times article describes, starting in the 8th century Vikings began exploring east and south of Scandinavia, traveling to Europe, Asia, North Africa, and the Middle East. Furthermore, sagas recorded by monks in the 13th century reveal that after exploring eastward and southward, the Vikings focused on expansion westward, particularly to the British Isles, Iceland, and Greenland [1]. Their expansion didn’t stop there. In the 1960s the first confirmed Viking colony in North America was found off the coast of Newfoundland, the farthest west Vikings had ever gone. In the five decades since, no other confirmed Viking sites have been unearthed in North America, despite archaeologists’ best efforts. The search continues, though, and new satellite technology is making it easier to identify potential archaeological hot spots. According to the article, space archaeologist Sarah H. Parcak and a team of Canadian scientists have used infrared and magnetometer satellite imaging to identify hot spots ranging from Greenland to Massachusetts, as well as a plausible Viking landing site at Point Rosee, Newfoundland, which is currently under investigation [2]. Evidence found at this site dates to the Norse era; however, the site has not been confirmed as the second Viking colony found in North America. If it is confirmed, it will be the farthest south that westward-traveling Vikings had ever gone, centuries before Columbus’s voyage in 1492. Regardless of whether this new site is confirmed, the vast expanse of the globe the Vikings traveled establishes their maritime prowess, especially in comparison to other cultures of the time. This success is especially notable given that Viking ships were structurally different from the ship Columbus used to reach the New World. This raises some questions about Viking maritime skills and voyages.
What maritime skills allowed for Viking success so far in advance of other nations, and how do these skills compare to those of the Wayfinders of the South Pacific? Finally, how do the boats made by the Vikings and the Wayfinders differ from the ship used by Columbus?

With so much of our known national and global history rooted in Central and Western Europe, society tends to acknowledge primarily the successes of those nations, since they typically have the most artifactual evidence, both written and physical. However, it is vital to acknowledge the feats and skills of all nations, even those we are only discovering today with the help of advanced technology.

From what is known about the Vikings, it is readily obvious that they differed from other Europeans in many ways. They were far taller than most Europeans, followed a pagan religion, and were very good sailors. More important is their maritime success prior to that of other nations, except perhaps the peoples of the South Pacific. Their specialized maritime skills in the construction and open-ocean use of their ships granted them this success so early.

Anything known about early Viking origins, history, and expeditions was passed down orally through the ages in the form of sagas, recorded only later by Christian monks in the 13th century as a means of understanding the Vikings who had raided and pillaged villages and monasteries in Europe before agreeing to form a system of trade. These sagas were translated from Icelandic to Latin, subsequently to Old English, and finally to modern English in 1972, so translational errors are to be expected. Also expected are variations among sagas, since they were passed down orally for centuries before being transcribed. A saga of particular importance is the Saga of Eirik the Red, a man banished from Iceland around AD 983, presumably for murder, and forced to keep moving or face the threat of death [3]. Eirik is hypothesized to be the first Viking to make it to North America; however, he wasn’t the first to travel west of Iceland. In fact, part of Eirik the Red’s success is due to the exploratory success of his kinsmen, who had first been blown off course to Iceland, only to discover southeastern Greenland [4]. Though there is some question as to the reliability of these sagas, modern scientific discoveries have found evidence that supports the sagas’ claims, including carbon dating to the correct time period. These sagas serve as a primary clue to the origins of the Vikings and to learning more about their culture, ships, and navigational techniques.

Ships were vital to many facets of Viking culture, used not only for trade and fishing but also for war and burials at sea. Additionally, the art of shipbuilding was highly valued and was taught and practiced only by free-born men [5]. Ships took on increasing significance in Viking culture as their structure changed over time, opening new opportunities. One of the oldest Viking ship specimens dates to around AD 320; lacking sails and having only oars as a means of power, it was used for traveling locally and in protected waters [6]. Sails became commonplace in Viking ships between AD 700 and 900, allowing greater distances to be crossed. Individual Viking vessels also had regional characteristics, such as wood type, and structural designs that allowed them to withstand the harsh elements of the Nordic climate, including overlapping side boards fastened with iron spikes and packed with hair and tar to make the ship waterproof [7]. Viking vessels further shared the characteristics of a keel and a sharply curved bow and stern, while the sides of the ship were slightly curved. All wood was cut by ax and joined together with iron or wood spikes and flexible lashing, resulting in a ship that was lightweight, elastic yet stable, and capable of being steered by a side rudder and powered by sail and oars.

Fig. 1. An example of a burial ship: the Gokstad vessel, housed in The Viking Ship Museum near Oslo, Norway, dated to around 900 AD.

However, there were vital differences in the structure and design of boats depending on their purpose. War ships were long, narrow, and shallow with a full deck, and were powered by both removable sails and oars, allowing them to be used both on the ocean and in rivers. These were the ships used for long-distance travel. Trade vessels, on the other hand, were deeper and wider, with half decks and fewer oar holes to allow more cargo room. Regardless of ship type, Vikings relied on maritime skills to reach their destinations. Along coasts they navigated by characteristic landmarks and by studying current patterns and local animals; on the open ocean they used sun compasses, shadow boards, and star sighting [8]. It is known that the Vikings used sun compasses: a fragment of an 11th-century compass dial found at Uunartoq, Greenland, helped them establish local solar noon and the length of the noon shadow out on the open ocean [9]. Such a sun compass would contain notches denoting the four cardinal directions as well as a parabola tracing where the noon shadow would fall around the summer solstice at their home latitude of 61 degrees. While traveling they would observe the difference in the local noon shadow, marking it on the stone compass. Through masterful shipbuilding, navigation techniques, and tools, the Vikings attained great maritime success far in advance of other nations.
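The latitude check implied by this noon-shadow technique can be sketched with simple trigonometry. The following is a modern reconstruction, not the Vikings' actual procedure; the function name, the 10 cm gnomon, and the comparison latitudes are illustrative assumptions.

```python
import math

def noon_shadow_length(gnomon_height_cm, latitude_deg, solar_declination_deg):
    """Shadow cast by a vertical pin (gnomon) at local solar noon.

    At noon the sun's altitude above the horizon is approximately
    90 - latitude + declination (Northern Hemisphere, sun due south).
    """
    altitude_deg = 90.0 - latitude_deg + solar_declination_deg
    return gnomon_height_cm / math.tan(math.radians(altitude_deg))

# Near the summer solstice (declination ~ +23.4 degrees), a 10 cm pin
# at the Vikings' home latitude of 61 degrees casts a noon shadow of
# roughly 7.7 cm.  Sailing south raises the noon sun, so the shadow
# shortens; a longer-than-expected shadow meant the ship had drifted
# north of the home latitude.
home = noon_shadow_length(10, 61.0, 23.4)
south = noon_shadow_length(10, 55.0, 23.4)
assert south < home
```

Comparing the observed noon shadow against the mark scratched into the dial at the home latitude would tell a navigator whether to steer north or south, which is consistent with the latitude-sailing interpretation of the Uunartoq fragment described above.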

Fig. 2 Artistic rendition of New Zealand War Canoe in Captain James Cook’s journal.


Like the Vikings, the people of the South Pacific treated maritime expeditions as a cultural necessity, both as a means of trade and as a means of acquiring new land. The history of the people of the South Pacific starts during the Pleistocene ice age (40,000 BC), when they left Southeast Asia, crossing parts of the Pacific via ice sheets, and by 4000 BC they had open-ocean sailing craft [10]. Millennia later, Captain James Cook of Britain’s Royal Navy was key to the European discovery of many South Pacific islands and was one of the first Europeans to have contact with their peoples. In his travel journals, Cook describes his experiences and records information learned about the peoples of each island, with the goal of sharing these findings with the British Navy upon his return. At this time Britain was claiming and settling newly discovered lands, extending its borders to new territory. Educated and raised in Britain, Cook made his observations and commentaries through a technologically advanced, European-educated, and imperialistic lens. When Cook sailed to New Zealand in 1770 and interacted with its people, he admitted in his journal that he was impressed by the ingenuity of the Maori boats he saw, including long narrow boats and larger canoes that Cook proposed were built for war [11]. All canoes were built on the same basic blueprint: they ranged in size from about 20 feet long to almost 70, could carry 40-100 men, and were powered and steered with paddles. All pieces were joined together with strong plating; the bottom of the canoe was composed of three pieces of wood, the sides were each made of a single plank spanning the length of the canoe, and ornamental pieces protruded outward at both ends of the boat [12].

Fig. 3 Artistic rendition of Tahitian Canoes in Captain James Cook’s journal.

Other boats, like the long and narrow ones, he likened to New England whaling ships, revealing similarities in construction between the two cultures. However, it is important to note that later discoveries have revealed many types and styles of vessels indigenous to these islands, ranging from dugout canoes with outriggers to open-ocean vessels, though few intact specimens exist due to the rapid biodegradation that occurs in these climates [13].

Fig. 4 Reproduction of the original chart of New Zealand explored in 1769 and 1770 by Captain James Cook

What remain intact, though, are the cultures and island populations they founded upon successfully sailing to a new island. The peoples of the South Pacific were so successful in sailing across vast distances in part because of their profound knowledge of the ocean, the sky, and the wind, and their ability to mentally record and process the state of these conditions at their point of departure, along the journey, and upon arrival in a new place. Thus, they were able to trace their steps back to their homeland. For example, traveling from Hawaii to Tahiti, the Wayfinders would memorize the location of specific stars and star clusters in both locations and use these stars at night to reorient themselves and navigate between the two islands. The same technique would be used when searching for new islands, the star orientation being mentally recorded as they traveled, hopefully, toward a new but unseen island, called an etak [14]. At times when the stars were hidden, Wayfinders would use sea marks, currents, wave patterns, the position of the sun, and birds as means of navigation. Thus, the Wayfinders were able to navigate the South Pacific without the help of a physical map such as the one Captain Cook used.


Both the Vikings and the Wayfinders were more advanced in maritime travel than other nations of their time. They shared similar navigation techniques, built ships that were long and shallow to avoid capsizing in the waves, and used the natural elements to guide their navigation. However, a primary difference lies in the technology used by each people. The Wayfinders predated the Vikings and were vastly more isolated from other nations and trade markets, meaning their resources were more limited and their technology less advanced. Their tools were made of wood, rock, or bone, unlike those of the Vikings, who could use iron spikes, axes, and saws. As a result, the Vikings were able to design and produce ships with differing structure and curvature. Additionally, technology such as the sun compass was not available to the Wayfinders. Such tools gave the Vikings a means of travel and information recording that the Wayfinders of the South Pacific lacked; instead, they had to rely more heavily on mental recall. It is this lack of technology, and their reliance on a profound understanding of the natural world and their ability to remember it, that made the Wayfinders such impressive sailors millennia in advance of the Vikings, who, in turn, were more skilled and proficient than other nations of their time.

Further ship components and designs were created over time and allowed for greater success, including sails, oars, rudders, outriggers, and flexible waterproof hulls that could withstand rough seas. The similarities between the ships of the two cultures are striking enough, considering that about 4,000 years and thousands of miles separate the two cultures’ initial use of boats. Perhaps more striking is that the design of open-ocean ships such as the one Columbus captained when he “found” the New World is nearly the opposite of those used by the Vikings and Wayfinders. Columbus sailed on the Santa María, a carrack, a fusion of Mediterranean and Northern European styles that first appeared in the 13th and 14th centuries [15]. Carracks consisted of a wide and deep hull with a rounded stern, an aftcastle and forecastle, and 2-3 masts; they weighed between 300 and 2,000 tons depending on how much cargo they held, and could be rowed by oars or towed by a small boat if necessary [16]. While masts were shared by all three cultures, the characteristic that distinguishes the Vikings and Wayfinders from the carrack of Europe is the shallow, narrow hull. This is ironic given Cook’s testimony that the ships he saw in New Zealand quite closely resembled whaling ships found in New England, in the New World, across the Atlantic Ocean from Europe. Perhaps most Europeans approached the challenge of crossing the oceans differently, aiming for a grand presence and the ability to carry enough food for survival; yet it seems this approach slowed them down. Perhaps, if their own survival had depended more on trade with countries not readily accessible by land, they too would have been great ancient mariners.



[1] Ralph Blumenthal,”View from Space Hints at a New Viking Site in North America,” New York Times (Online), March 31, 2016. (accessed June 28, 2017)

[2] New York Times (Online), March 31, 2016.

[3] “Saga of Eirik the Red,” in Landnámabók, trans. Hermann Plásson and Paul Edwards. Kristen A. Seaver, The Last Vikings: The Epic Story of the Great Norse Voyagers (London: I.B.Tauris, 2010), 15-17.

[4] “Saga,” trans. Plásson, 16.

[5] Paul Christian Sinding, The Northmen: the sea-kings and Vikings, their manners and customs, discoveries, maritime expeditions, struggles, and wars. The discovery, and the thousand years’ anniversary of Iceland, (Paul Christian Sinding, 1883). 34.

[6] Per Bruun, “The Viking Ship,” Journal of Coastal Research 13, no. 4 (1997): 1282-289.

[7] Bruun, “The Viking Ship,” 1288.

[8] Balázs Bernáth, Miklós Blahó, Ádám Egri, András Barta, and Gábor Horváth, “An Alternative Interpretation of the Viking Sundial Artefact: An Instrument to Determine Latitude and Local Noon,” Proceedings: Mathematical, Physical and Engineering Sciences 469, no. 2154 (2013): 1-16.

[9] Bernáth, “An Alternative Interpretation,” 2.

[10] Alastair Couper, Sailors and Traders: A Maritime History of the Pacific Peoples. (University of Hawaii Press, 2009), 22-59.

[11] James Cook, “Captain Cook’s Journal During His First Voyage Round the World Made in H.M. Bark ‘Endeavour’ 1768-71,” Cook’s Journal by James Cook, accessed July 10, 2017.

[12] Cook, “Captain Cook’s Journal.”

[13] Couper, Sailors and Traders, 25.

[14] Charles O. Frake. “Cognitive Maps of Time and Tide Among Medieval Seafarers.” Man, New Series, 20, no. 2 (1985): 256. doi:10.2307/2802384.

[15] “Carrack or Nao,” The Mariners’ Museum | EXPLORATION through the AGES, accessed July 19, 2017.

[16] “Carrack,” The Mariners’ Museum.

Search Terms: histor*, maritime, viking*, pacific, atlantic.


Figure 1. Per Bruun, “The Gokstad-Vessel,” Fig. 2 of “The Viking Ship,” Journal of Coastal Research 13, no. 4 (1997): 1282-289.

Figure 2. James Cook, Illustration 7, War Canoe of New Zealand, Cook’s Journal by James Cook,

Figure 3. James Cook, Illustration 5, Tahiti: Types of Canoes, Cook’s Journal by James Cook,

Figure 4. James Cook, Illustration 10, Chart of New Zealand, Cook’s Journal by James Cook,


Deforestation of the Amazon


In discussions of global warming, or more recently climate change, fossil fuels have always played the main nemesis in keeping carbon emissions low and greenhouse gases under control. But there is another side to the story: there will always be carbon emissions, since life on Earth emits carbon, but where does it go and how is it dealt with? Nature’s biggest answer is trees, namely rainforests. While climate change is increasingly becoming an issue, everyone seems focused on carbon emissions. That remains a problem, but another issue whose solution could solve many more problems is decreasing deforestation; since the 1970s, “… the Brazilian Amazon has lost nearly a fifth of its forest cover already…”[1] The Amazon rainforest accounts for half of all the carbon that tropical rainforests store, and “recent estimates suggest a third of climate emissions (or even more) could be offset by stopping deforestation and restoring forest land”[2]; starting with the Brazilian Amazon would be a huge step in reversing the negative trend we are currently on. Even in the face of this possible solution, or at least turning point, the Brazilian government has made a proposal to allow further cutting and clearing of the tropical rainforest to make room for raising more crops and livestock. How has the importance of economic expansion been placed above nature? What steps have been taken to cause deforestation on such an alarming scale?

Much like America before WWII, Brazil had its own industrial awakening after the war. The country spurred to life and began cutting, clearing, and burning the Amazon to make room for agricultural expansion as well as increasing industrialization that exploited natural resources like rubber and oil. In recent years, the change of government from despotic to democratic has led to less promotion of expansion into the Amazon and to new policies that impose harsher penalties on those who illegally push further into the rainforest. I will begin by explaining what factors promoted this profiteering from the vast rainforest, both industrial and agricultural, starting in the colonial era. Then I will analyze the industrial expansion that extended into the early 20th century before WWII. Finally, I will evaluate the post-WWII industrialization of Amazonia and analyze the scientific data to show why this matters in modern society. From this we will be able to employ methods that can turn the tide against rising temperatures and further understand our ecosystem.

Figure 1: A group of seringalistas, who run the operations of latex extraction. These powerful men control the administrative side of bringing the latex out of the jungle and selling it for distribution.

The roots of industrial expansion into the Amazon lie in the rubber trade of the 1800s and early 1900s. With Goodyear’s breakthrough for the latex industry in 1837, rubber became suitable for various climates and more adaptable, causing demand to increase dramatically: “rubber exports from Brazil, rose from 200 tons in 1830 to 17,000 tons in 1890.” [3] This led countries like America and Great Britain to seek the best source of latex for the highest yield. Once the strain of Hevea native to the Amazon was found to be the best source of latex, America and other nations began venturing into the Amazon to find dense populations of the tree for production. After the best locations of Hevea trees were mapped out, American and British investors made multiple ventures to secure whatever lands they could so they could control latex production. They also incited the unemployed, both foreign and domestic, to take on the risky, grueling work of tapping latex, which also carried the threat of disease. The demand for latex to produce rubber created a vast network of trails, camps, and trade posts, and made for a booming economy in the port cities the latex had to pass through, causing steep rises in local populations. While the clearing from this venture did not last and was not on the scale of the late 1900s, these networks and cities laid the groundwork for a greatly industrialized Brazil, causing deforestation on a much larger scale.


The turn of the century was, in relative terms, very economically productive for Brazil; despite the Great Depression, Brazil was a land rich in natural and raw resources that many Westernized countries needed to run their economies. One of the largest resources exploited was rubber. It was increasingly valued with the rise of the automobile, and demand spiked urgently when WWII hit. In the early 1920s, Brazil and America arranged a joint venture to expand the rubber industry in the Amazon: “The Brazilian Embassy in Washington is prepared to give to American rubber manufacturers any information and data desired to promote the development of the rubber industry.” [4] There were many more cooperative efforts between the US and Brazil to grow the industry and expand it into the Amazon rainforest. Once WWII was in full swing and America was directly involved, the country needed a greater supply of rubber to match wartime demand. “The Brazilian-American rubber agreement, by which Brazil agrees to sell and the United States to purchase all surplus rubber after domestic needs are filled” [5] was a huge step that had Brazil supporting and pushing even further into the Amazon, assured of selling all that was cultivated.

Figure 2. Also known as a “rubber tapper”, a seringueiro uses only a knife and bowl to collect the raw material.

Rubber was not the only industry marching into the Amazon rainforest: cacao was also expanding rapidly. The swelling of all these industries was choking off all available land, so the government continued to promote pushing into the rainforest: the “chief activities of the Institute (the Cocoa Institute of Bahia) have been to finance production and marketing.” [6] All levels of government, from federal to local, promoted this exploitation of natural resources, putting great strain on the environment and pushing the forest further back from its native lands. “So far, nothing but propaganda and advice have been given to improve methods of production and grading has been left to the local exporters.” [7] This careless focus on acquiring greater economic power and letting indifference lead the charge into industry has been a great cause of deforestation. If this reckless style of expansion were to continue, our so-called renewable resources would soon be wiped from their native landscapes and be available only through industrial and automated production.

Beginning in the late 1940s, the Brazilian population boomed and congregated in metropolitan areas, but at such a speed that the necessities of urban life fell into short supply. First and foremost was food: the sharp increase in urban population and the lack of agriculture to supply it left the populace poorly fed, unable to do much more than survive. There were also shortages in areas less crucial to life but important nonetheless for building an advanced country, such as large-scale industry: “…in summary, nearly all of the essentials needed in the type of society Brazil is attempting to build are in short supply.” [8] The country could barely feed its own people and produced only enough to keep itself afloat. What followed can hardly come as a surprise: the people began pushing into the rainforest to increase agricultural area, and the country started industrial expansion to stimulate economic growth.

Figure 3. Lands were stripped of trees and vegetation to make way for industrialized agriculture, larger cities, and roadways.

The push into the rainforest was not always malicious or aimed at deforestation. When the capital of Brazil was moved to Brasília, it opened connections from the north to the south, which quickly increased local populations of peasants looking for land to farm and opportunities for jobs. As the populations swelled, people were forced to sell part of their land or were removed from their homes entirely, leaving them one option: expand into the rainforest; “some pushed further into the jungle, starting the process of clearing and eventual expulsion over again; and some… settled in the towns which sprang up along the highways.” [9] These moves were intended not to expand the country economically but to unite it and make it more accessible, with this vast migration as an unexpected outcome. Despite that, the growing numbers in the south led to greater industrial and agricultural acquisitions of land, as well as inflating the already overpopulated cities or creating new ones along the highways.

Fast-forwarding to more recent threats, the governments around the Amazon promote expansion to improve their economic standing in the world. One of the greatest threats to the Amazon is government-backed ventures into the rainforest for agriculture, industry, or infrastructure: “evidence suggests that the Amazon forests tipping point may be reached by the year 2030… most pressure is being exerted by economic development including cattle ranching and agricultural production, and construction of road infrastructure.” [10] While agriculture can be harmful because it requires large areas of land, industry and infrastructure can have worse long-term effects, since they invite workers and their families to move further into the rainforest, causing more expansion and more deforestation to support it. In 2000, twelve South American countries developed the Initiative for the Integration of the Regional Infrastructure of South America, or IIRSA, a plan to connect the participating countries through road networks cutting through the heart of the Amazon in order to further their economies. There are the obvious initial harms to the environment, like the destruction of habitats where the roads will lie and the area required for construction, but there will also be local surface temperature rises on the roads themselves, and the network may compel people to expand along it, increasing degradation in their wake.

For economic, political, and sometimes social purposes, westernizing a nation is a movement that can sweep it up quickly and be extremely difficult to stop or even slow down. This systematic and strongly backed destruction of the largest rainforest on Earth has repercussions across many domains: climate change, the economy, wildlife. As long as Brazil remains overpopulated and underfunded, there will be desperate people who exploit the resources at hand to keep living or to improve their situation. Before we understood the extensive damage done to the Amazon, people pushed deep into it to profit from its vast richness. This is evidence that more care must be taken before such irreversible steps are taken again. Given how far deforestation has gone, returning the forest to its natural state is highly unlikely, as that would take lifetimes of regrowth and the removal of the many people who now call it home. Still, we can learn from the past of this landscape to make better decisions about our resources in the future.

[1] Mooney, C. (2016, Feb 27). Finding the crucial carbon sink; rather than cuts to fossil fuels, future of earth may hinge on saving the amazon’s rainforests. The Vancouver Sun.

[2] The Vancouver Sun, Feb 27, 2016 

[3] Tucker, Richard. Insatiable Appetite: The US and the Ecological Degradation of the Tropical World. University of California Press. Accessed July 13, 2017.

[4] “Brazil Invites US to Develop Rubber.” The New York Times (New York), April 03, 1923. Accessed July 29, 2017.

[5] “Rubber Agreement Welcomed in Brazil.” The New York Times (New York), May 10, 1942. Accessed July 29, 2017.

[6] Keithan, Elizabeth. “Cacao Industry of Brazil.” Economic Geography 15, no. 2 (April 1939): 195-204. Accessed July 30, 2017. 

[7] “Cacao Industry of Brazil”, April 1939. 

[8] Smith, T. Lynn. "The Giant Awakens: Brazil." American Academy of Political and Social Science, p. 102. March 1961. Accessed July 21, 2017.

[9] Bunker, Stephen G. "Forces of Destruction in Amazonia." Environment 22, no. 7 (September 1, 1980): 14-43. Accessed July 30, 2017.

[10] Dijck, Pitou van. The Impact of the IIRSA Road Infrastructure Programme on Amazonia. Taylor and Francis, March 5, 2013. Accessed July 13, 2017.


Figure 1. Circa 1920, group of seringalistas.
Figure 2. 1966, a seringueiro extracts latex.
Figure 3. Livestock pictured in front of freshly cleared forests.

The Great Depression's Impact on British India

Download PDF


The Great Depression of 1929 had severe impacts on British India. The government of British India adopted a protective trade policy that did great damage to the Indian economy, although it benefited the United Kingdom. Throughout the period 1929-1937, imports and exports fell sharply, crippling seaborne international trade and driving up unemployment in the United States and Great Britain. This paper focuses on the lessons the twenty-first-century generation can learn from the Great Depression of 1929 and what it reveals about our future.

Discussion: The Great Depression's Impact on British India

The Great Depression affected Indian government policies in ways that provoked widespread protest across the country. As the nation's struggle deepened, the Indian government approved a number of nationalist economic demands, such as establishing a central bank. Consequently, the Reserve Bank of India Act was passed, and in 1935 the central bank of India came into being. Business in India was highly fragmented: there was an uncoordinated and far-dispersed approach to doing business and promoting commerce. Perhaps the most telling evidence is the timing of the creation of an organizational body to protect commercial and industrial activities: it was only established in 1929, upon realization of the effects of the growing economic instability brought on by the Great Depression. As Figure 1 suggests, the turmoil began with crashes in the New York stock markets and spread panic through the banking system.

Although the overall standing of India's economy seemed heavily jeopardized by the self-interested rule of the British, whose prime focus was on strengthening London's financial stability, British India grew markedly in the years leading up to the war of the late 1930s. This was primarily due to an intense, laboriously fuelled focus on improving the anti-imperialist-designated manufacturing sector, which set the stage for post-war industrial leverage. During the Great Depression, trade balancing was a question of incentive between the British Raj and British India, owing to the unprecedented reversals in trade-market advantage for both sides. Initially, the trade balance had settled in favor of the British Raj in the earlier years of the Depression, but British India regained trade in its favor in the later years (1936/37 and beyond).

India was among the nations that suffered worst from the Great Depression: the fall in its commodity prices was steeper than the fall in the prices of imports from the United Kingdom. Farmers had shifted in large numbers from food crops to cash crops in order to meet the high demand of mills in the United Kingdom [1]. They were now crippled, since they were unable to sell their commodities in India as a result of high prices. Nor could they export products to the United Kingdom, since that nation had already adopted a protective policy that prohibited Indian imports.

Money was deficient all over India, causing widespread poverty. Cash crops such as wheat and rice were cultivated for sale and could not be used for private consumption [2]. With exports limited and little domestic sale of indigenous manufactures, products accumulated and cash flow was restricted.

Figure 1: 1929. Millbury bank patrons try to recover what little money they can.

A conference convened by member countries of the British Empire addressed the worsening economic status of India, with particular consequences for Indian entrepreneurs and merchants. At the meeting, named the Imperial Economic Conference, it was resolved that certain commodities shipped from Britain would be admitted into India free of duty, giving Britain between 7.5 and 10% more duty relief than India received. The conference, held in Ottawa, had spurred hope among most Indians, who expected the more pressing matter of currency policy to top the agenda; but that subject was not even mentioned, let alone addressed [3]. Knight points out that the 1932 Ottawa conference brewed strong discontent, prompting businessmen in British India to turn to Congress for financial equity, owing to their lack of a "point of pressure" with the Indian government.

The fall in income in British India did not necessarily occur because of a fall in production or total output; it was mainly due to the collapse in commodity prices. The price of agricultural crops produced in India dropped from approximately Rs. 1021 crores in the 1928-29 period to a low of Rs. 474 crores around 1933-34, which induced a worrisome burden of debt at the regional and national levels. According to the Royal Commission on Agriculture, the debt of Indian agriculturalists stood at Rs. 900 crores in 1929; it increased by about 50 percent over the next two years [4]. The scourge of currency depression finally lifted around 1938, in the run-up to the end of the Depression in 1939.

The financial policy the British imposed on the Indian economy was one more tool aimed at disadvantaging India for the gain of the British Empire. The policy was designed to give the British currency an edge over the Indian rupee through a high exchange rate. Rothermund holds that this policy arrangement had been created even before the Great Depression, at a time when the Indian rupee was tied to the gold standard and therefore had a fixed exchange rate (660) [5]. The most decisive move by the Indian government was to contract its currency from Rs. 185 crore to around Rs. 148 crore in 1931 (661) [5]. The effects of the dwindling currency were felt countrywide, and the economic stability of the country was in jeopardy. The efforts of activists such as Jawaharlal Nehru were denied fruition, as the British front was incredibly strong and rigid.

Simmons posits that one of the aftermaths of the Great Depression, especially in third-world nations, was economic slowdown. In India in particular, there was a collapse of effective "growthmanship", that is, a lack of the proper strategizing needed to raise per capita income and product as a path toward sustainable and consistent economic development (589) [6]. According to Simmons, there is indeed evidence of economic deterioration occurring in the pre-independence era (including the Great Depression) and the wartime period, in the form of a drop in product exports as well as adversities associated with trade movements (590). The scourge of dropping commodity prices in the late 1920s and early 1930s was linked to the discourse of despotism, as highlighted by other scholars such as Sir Arthur Lewis and John Latham. Simmons reasserts that the effects of economic dissension in India gave powerful impetus to several financial policies as well as to movements meant to wage the politics of revolt in India, Quit India included (589).

In the proceedings of the Ottawa Conference in 1932, the Indian government held that even though the country was prepared to welcome any plans leading to sustainable development, India was not prepared to embrace tariff impositions from the British Empire. Speaking at the conference on behalf of India's government was Sir Padamji P. Ginwala, previously president of the Indian Tariff Board.

Figure 2: Sketch of the Round Table Conference, led by Sir C. P. Ramaswami Iyer, 1932.

The conference had been attended by select members of India's tariff board, alongside representatives of the four countries under British rule that were discussed at the conference. The previous year it had been written that the national government of India had abandoned the Free Trade Agreement, but at this conference it was recorded that India had agreed to enter a trade agreement with the UK in August of 1932 [7]. The author of this paper was an economist in national matters in India and at the forefront of addressing sustainable economic development in British India. The underlying assumption was that there had been no adversities in the Indian agricultural sector at the time the agreement was signed. The adversity of the Great Depression intensified greatly, such that by 1933 over one third of the mills in Bombay alone had been rendered defunct, a situation that prompted Indian millowners to strike a contract with Lancashire, dubbed the Lees-Mody Pact. Inasmuch as the benefits of this pact were split between India and the British Raj in the latter's favor, regions like Ahmedabad refused to approve the pact, objecting to its coercive nature. No substantial gains were realized by British India upon signature of the pact, as the British had no particular interest in Indian business. Figure 2 shows select committee members who convened at the Round Table Conference to discuss the adversities facing various sectors during the Great Depression.

In August 1942, a statement was published in the New York Times as released by the government of India and forwarded to the All-India Congress Working Committee in April of 1942. This was one of two "Quit India" resolution drafts brought forward with the intent of garnering approval from the committee. Quit India was a resolution first passed on August 8, 1942, as a call for the immediate termination of British rule in India for the sake of the nation's prosperity and to allow the United Nations' endeavors to succeed. In the draft, the Working Committee condemns the imperial rule of the British while making the point that India was not at war with Japan, as had been apparent in the years leading up to 1942. It was also an earnest request to the British to relinquish India's government, and an appeal to the Japanese government to dispel the impression that they were at war with India. The context of the text was a peace entreaty extended to the Japanese government by India, and also an appeal to the British Empire to abandon all its efforts at governing India [8]. The reviewed draft was written by Mohandas K. Gandhi, frontrunner of the insurgency and leader of the Indian Independence Movement, a nonconformist group created to resist British rule. He was known for his relentless opposition to British rule, and to this day he remains a historical figure for his efforts. The source operates on the assumption that the expanse and severity of British rule in India had subsided enough for India to be in a position to contest for its rights as a country.

Figure 3: Mahatma Gandhi led the Quit India Movement, calling for the termination of British rule in India. Bombay, 1942.

Knight draws two lessons from the Great Depression in India. The first concerns the outcomes of British rule in India in the wake of the Depression: Indian governance operated under the idea that it was British rule that amplified the effects of the Great Depression, hence the anti-British movements, including the Quit India resolution. Although the effects of the Great Depression would not have been felt so directly in India were it not for the British presence, the economy of British India could hardly have performed much better on its own. Secondly, the price the British Empire had to pay on the eve of the looming Second World War was considerable. Knight notes that the British Army, as of 1938, "was unfit to take the field against land or air forces equipped with up-to-date weapons." The recourse was to retrain its army and expand its size, as recommended by the Chatfield Committee in 1939 [9].

The Great Depression in British India was a major crisis of 1929, written into the books of history because it helped lead to the Quit India resolution, which demanded that the British relinquish their colonialism and allow India to stand on its own. Through its protective policy, which benefited the United Kingdom at India's expense, British rule amplified the effects of the Great Depression instead of alleviating them [10]. British policy tools attempting to give the British currency an edge over the Indian rupee caused further havoc during the Depression, damaging the Indian economy and leading to the convening of the Ottawa Conference to discuss a way forward for an economy on the verge of collapse. The Quit India statement, drafted in 1942, awaited the approval of the All-India Congress Working Committee as a call for the termination of British rule in India for the sake of the nation's prosperity. The Great Depression is a wake-up call for policymaking: policies that interfere with an economy can eventually lead to that economy's collapse.



[1] Trader, "Black Thursday: Stock Market Crash Causes Chaos and Panic in 1929," New York Daily News, October 23, 2015.

[2] Trader, October 2015.

[3] Manikumar, K. A. A Colonial Economy in the Great Depression, Madras (1929-1937) (Orient Blackswan, 2003), p. 163.

[4] Singh, Kanti. The Great Depression and Agrarian Economy: A Study of an Underdeveloped Region of India (Mittal Publications, 1987), p. 32.

[5] Rothermund, Dietmar. "The Impact of the Great Depression on India in the 1930s." Proceedings of the Indian History Congress 41 (1980): 657-669.

[6] Simmons, Colin. "The Great Depression and Indian Industry: Changing Interpretations and Changing Perceptions." Modern Asian Studies 21, no. 3 (1987): 585-623.

[7] Ginwala, Padamji P. "India and the Ottawa Conference." Journal of the Royal Society of Arts 81, no. 4175 (1932): 41-58.


[9] Knight, Lionel. Britain in India, 1858–1947. Anthem Press, 2012.

[10] Foster, J. B., and F. Magdoff. The Great Financial Crisis: Causes and Consequences (NYU Press, 2009).


Search terms: (great depression), nation*, indigenous, import*, export*.

NOTE: The asterisk at the end of a word stem retrieves multiple endings of the root word.
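The truncation wildcard described in the note can be illustrated with a short Python sketch. The function name and the word list below are invented for demonstration only; real databases implement truncation internally, with syntax that varies by vendor.

```python
import re

def truncation_match(stem, words):
    """Return the words that begin with the given stem,
    mimicking a database truncation search such as import*."""
    # Anchor at the start of the word and escape the stem so that
    # only literal prefix matches are returned.
    pattern = re.compile(r"^" + re.escape(stem))
    return [w for w in words if pattern.match(w)]

# A small invented corpus of index terms.
corpus = ["import", "imports", "imported", "importation",
          "export", "nation", "national", "nations"]

print(truncation_match("import", corpus))
# ['import', 'imports', 'imported', 'importation']
print(truncation_match("nation", corpus))
# ['nation', 'national', 'nations']
```

As the example shows, a single stem such as `import` pulls in `imports`, `imported`, and `importation` at once, which is why truncation broadens a historical database search.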




Women in the sporting world: A long-lasting fight for worth that isn’t over yet

Download PDF

Introduction: A Gendered Issue

The 2012 Olympics in London were the first to showcase women in every sport and the first in which every participating nation sent women to compete [1], but modern female athletes still face some of the same inequalities they have faced for decades. The first modern Games in Athens in 1896 did not include female athletes at all, as founder Pierre de Coubertin felt the inclusion of women would be "impractical, uninteresting, unaesthetic and incorrect" [2]. Even in the 21st century there have been arguments against allowing female athletes to compete: women's pole vault was not added to the Summer Olympics until 2000, and ski jumping was only added to the Winter Olympics program in 2014 [3]. Women's boxing was likewise excluded from the Summer Games until 2012, and even with the addition of the sport, the boxers had to fight not to have to wear skirts during their competitions [4]. A Swiss doctor in the 1950s advised the International Olympic Committee against these and other sports for women, and such antiquated claims about women's abilities were still being upheld by the International Ski Federation as recently as 2005 [5]. Even as the athletic women of the world gain international competition rights, with several national teams signing their first full-time female athletes, male athletes remain significantly ahead where it counts. As of 2016, women's sport still receives only about 5% of media coverage and less than 1% of corporate sponsorship money [6].

Changes in the public view of women's recreation have repeatedly, and not always for the better, shaped women's ability to take part in sports: from ancient times, when female athletes were sought out for entertainment but still kept out of the ancient Greek Olympics; to the 1800s, when skewed science and public opinion became a halting factor for women's sports altogether; to the 1900s, when women athletes were once again on the upside of change. Gaining access to the modern Olympics in the 1920s was a huge turning point, but it was not the end of the fight, as competing women have continued to face scrutiny and scant support, both public and financial, even today.

Ancient Views

In Ancient China, described by Michael Speak as the period from primitive society to AD 960, it became commonplace for women to take part in the recreational activities of the day. Pictographs on bones and tortoiseshells created during the Shang Dynasty (1600-1046 BCE) indicate that women took part in swimming, boating, and fishing, as well as singing and dancing for festivals. The empress dowager of Emperor Su Zong of the Northern Wei period (AD 386-534) was a very strong archer who held and participated in competitions among both military and civil officials. There is a record from AD 821 of a polo match played by imperial maids, including Wu Zetian, who later became the sixth monarch of the Tang Dynasty. It was generally accepted and encouraged for women to partake in these activities and others, especially in royal circles [7]. The Chinese Middle Ages saw the development of ancient forms of golf and soccer, as well as an increase in combat-style sport to support military training; wrestling became extremely popular, and women put on public performances to display their skills. Aiyaruk, the daughter of a Mongol king, was extremely strong in combat; she refused marriage until a man could beat her in a fight and won 100 horses from each man she defeated. It is said that she gained more than 10,000 horses in her time and never married [8].

On the other hand, there were never any women's events in the ancient Olympics. Even so, female victors were recorded, especially in equestrian events, because the victory went to the stable owner rather than the jockey or charioteer. Especially notable is the case of Kyniska of Sparta, who won the four-horse chariot

Figure 1. Woman on chariot assumed to be Kyniska of Sparta.

race in 396 BCE. She commissioned and dedicated a statue to her victory in Olympia, and when she entered and won again in 392 BCE the Spartans dedicated another statue to her. There is also evidence from later centuries of a competition for teenage girls held every four years, known as the Heraia. The festival involved foot races in three age groups, run on the Olympic stadium track shortened by a sixth of its length, and the winners received an olive crown, the same as Olympic victors. In Sparta, a well-known military state, physical training was compulsory for teenage girls in more than just foot races; it is said to have included javelin and discus throwing as well as wrestling. There is also evidence that although women did not compete in the Olympic games, they competed on three other international stages: the Pythian, Isthmian, and Sebasteian Games. Victories at these competitions were taken very seriously, as shown by a statue dedicated by a father to his three daughters, who took several victories at these Games [9].

Developments in the 19th Century

The Victorian era was ruled by cultural distinctions between the sexes. Men were characterized as naturally aggressive and competitive, while women were seen as inherently emotional and passive, especially with respect to physical ability. Women were considered unfit for sports and physical activity due to their supposed natures, an idea seemingly confirmed by the fact that men were usually seen playing sports while women were not. Science and medicine were constructed to support this idea, leading to stereotypically restrictive clothing as well as to women often eating little and not exercising. The 19th century saw a shift in medical perspective toward the benefits of moderate exercise for women's health and their ability to bear healthy children, establishing medical gymnastics, massage, and other suitable exercises as treatments as early as the 1830s. Later in the century it became commonplace for women to take part in low-energy recreation such as croquet and gentle forms of tennis and badminton, as these were considered family-centered activities. These games were a way for women to display how cultured they were to potential suitors and others in their social class. The social conventions in place did little to stop women from playing these sports with greater vigor as time went on [10].

A major step in the development of recreation for women came with the advent of higher education for women, as well as lower education more similar to that already available to boys. From the late 1860s several girls' schools were opened, all of which included some form of physical education in their curricula. Another shift in medical opinion led to support for more energetic forms of exercise and a push for mandatory physical education [11]. Organized outdoor games during lunch breaks became commonplace in elite girls' schools. One particularly notable system was at the North London Collegiate School, where Elizabeth Garrett Anderson,

Figure 2. ‘Musical Gymnastics’ at the North London Collegiate School for Girls

a governor of the school, encouraged the headmistress to lengthen the lunch break to accommodate the games. By 1885 the games club was a regular feature, and girls were able to participate in ninepins, badminton, fives, and battledore and shuttlecock, a list that had expanded by the end of the 1890s to include hockey, netball, and tennis. These games were all included in the school's Sports Day, which became an annual event. The late 1800s also saw the appointment in 1879 of Miss Concordia Löfving from Sweden as the 'Lady Superintendent of Physical Education' in girls' and infants' schools [12]. Many of the first sport clubs found their start in the curricula of girls' schools, as teachers of physical education often set up inter-school, local, and regional competitions, and the majority of athletes on the first national teams came from the schools [13].

Girls and women participated in increasingly varied and rigorous forms of sport and exercise throughout the 1800s. This became more socially accepted as time went on, especially with the model of Queen Victoria as a powerful woman both on the throne and in her private life. The Sportswoman's Library of the time held up Victoria as an example, which pushed forward the love of sport among women [14]. Archery was a popular sport for aristocratic women from the beginning of the century, supported by the men's Grand National Archery Association, and several other sports gained national organizations: public swimming facilities opened to women as early as 1858, the All-England Croquet Club formed in 1867 and allowed women to take part in organized competitions, the Cyclists' Touring Club admitted women members as early as 1880, the first Ladies' Punting Championship was held in the late 1880s, the first private women's hockey club formed in 1887, and the Ladies Golf Union formed in 1890. Other sports did not gain female counterparts to their governing bodies until the turn of the century or later, such as badminton (1900), competitive skating (1906), lacrosse (1912), rounders (1923), netball (1926), and cricket (1926) [15]. However, the growth of the period was not without criticism of women's sports in general. Critics feared that game-playing would cause irreparable physical damage and hormonal imbalances. Even at the girls' schools where sports were common, the physical activities of the students were mostly private and contained within the school [16].

The development of women's teams and leagues also did not guarantee accessibility for female athletes. Men's teams often held priority when it came to resources, and men were most often the ones in positions of authority and control within leagues. Women's teams usually got less time in facilities, less or poorer-quality equipment, poorer coaching, and less funding. Female athletes also faced social contestation: the threat of harassment and of being barred from competitions, even under the jurisdiction of their own leagues, was very real in the early 1900s, and it was usually the public's idea of what was ladylike that gave men grounds to exclude women competitors [17]. It was, and has continued to be, claimed that sports are "masculinizing." In the rare cases where males and females have been allowed to play together, the arrangement has often been reversed and disallowed once the girls started doing better than the boys. Jackie Mitchell, who signed a minor league baseball contract at 17 years old, struck out the legends Babe Ruth and Lou Gehrig in an exhibition game. The result was her contract being voided by the commissioner and women being barred from baseball altogether, on the basis of the sport being too strenuous [18]. The public became more accepting and supportive of women's sports in the period between the two World Wars, and although women were the main proponents of the development of women's leagues and clubs, it was often the support of men and men's clubs that sustained the women's clubs and gave them the push into the public eye that they needed [19].

Accessing the Modern Olympics

The proposal to revive the Olympic Games in 1894 saw women's sports take a step back as well. The founder of the modern Olympics, Pierre de Coubertin, was strongly opposed to women competing in the Games. De Coubertin believed that physical activity and discipline made young males feel happy and free, and it was to men that he particularly catered. He praised the values of sportsmanship and 'gentlemanly conduct,' comparing them to the chivalry of knights, and he believed that women competing in sports, and specifically in the Games, would be improper. Women did, however, compete in the Olympics at various points throughout his lifetime, and increasingly after his death, particularly in the events testing strength and stamina that he consistently deemed inappropriate for women [20]. The members of the International Olympic Committee (IOC) unanimously agreed with de Coubertin not to allow women into the first Games in 1896, at least in part because de Coubertin was the IOC's sole benefactor as well as its president. However, other men in the Olympic movement and in the international sports federations, involved in the development of women's sports in their own countries, supported women's fight for Olympic recognition. In 1900 and 1904 the IOC handed responsibility for arranging the Olympics to the committees of the host cities, Paris and St. Louis, and it was under these committees that women were able to participate for the first time. The British Olympic Association, which took charge of the 1908 Olympics in London, included women's archery, lawn tennis, and figure skating in the official program. There was a reactionary withdrawal of these competitions, however, in 1912, when the IOC blocked all but a few 'feminine-appropriate' events, which were not given the same status as men's competitions until 1924 [21].

Figure 3. Alice Milliat

The change was in part thanks to Alice Milliat, who had founded a French women's sports organization through which she challenged the IOC's ruling preventing women from competing in the Olympics. De Coubertin held onto his original ideology, but the majority of the IOC voted to allow tennis and swimming competitions in the 1920 Games. After this, a group of women from Europe and the Americas arranged an athletic competition in Monte Carlo in 1921 that included a range of track and field events. This group was the beginning of the Fédération Sportive Féminine Internationale (FSFI), which helped accelerate the development of international women's athletic competition by organizing its own alternative international Games, the Women's Olympics, later renamed the Women's World Games [22]. The first of these Games, held in Paris, saw 77 women compete in several running, hurdling, long jump, high jump, shot put, and javelin events [23]. By the last of the Women's World Games, held in London in 1934, 19 countries had become involved and sent female athletes to compete. Women's competitions were again refused entry to the Olympics in 1924, backed up by a "scientific" report detailing the outdated claims about the differences between men and women and their capacity for athletics. However, when the all-male International Amateur Athletic Federation (IAAF) agreed to take control of women's events for the Olympic program, women's sports were finally acknowledged as eligible for Olympic competition. In 1926, after de Coubertin retired from the presidency of the IOC, the IAAF officially recommended a women's program of 5 events for the 1928 Games. The Women's World Games had included 11 events, so the offer to move into the Olympics was not accepted lightly; the British women athletes boycotted the 1928 Games on account of the drop in the number of competitions [24].

It was reported in January 1929 that a vote had been introduced at the fifth annual meeting of the Women's Division of the National Amateur Athletic Federation (NAAF) to keep American women from competing in the 1932 Olympics. Miss Ethel Perrin, chairman of the executive committee and staff associate of the American Child Health Association, led the opposition on the basis of the specialization required of the few athletes selected, the opportunity for exploitation of the athletes, and the setting and breaking of records taking away from "play for play's sake" [25]. The resolution passed, with the Women's Division of the NAAF deciding instead to hold a festival at the same time as the Olympics, with opportunities for women to partake in more varied but less strenuous activities [26]. Women did still compete in the 1932 Olympics, however, and a new world record was set in every women's event. The 1936 Games still saw only 328 female athletes across 4 sports, as opposed to 3,738 men across 11 sports [27].

The Games have seen slow but steady progress in the addition of new women's events. After the women's 800-meter race was removed following the 1928 Olympics, it was not reinstated until 1960. The IOC tried (and failed) to remove the shot put and discus events from the Olympic program in 1966, notably with the support of the Women's Board of the US Olympic Development Committee, on the grounds that the sports skewed the feminine self-image. The women's 400-meter was introduced in 1964, the 1500-meter in 1972, and the 3000-meter in 1984. Two exclusively female sports, rhythmic gymnastics and synchronized swimming, were also added to the 1984 Games. It was not until the 1988 Olympics that the numbers of men's and women's events were somewhat equalized, with 165 events across 26 sports for men compared to 83 events across 22 sports for women. Although a disparity still existed, this was a significant step from the 1984 Games, which held 168 all-male events, 73 all-female events, and 15 mixed events [28].

Modern Challenges and Success

Gaining access to the Modern Olympics was an important step for female athletes towards recognition and appreciation, but many challenges still face the sporting women of the modern world. Female athletes have endured a series of sex/gender tests administered by the IOC since the 1964 Olympics. The tests were initiated by the IAAF to prevent cheating, under the assumption that men would try to enter women’s competitions to gain an advantage. The assumption was flawed in itself, as it mirrored the social perception of female athletic events as easier than male athletics. In the beginning, the test consisted of competitors appearing naked in front of a panel that visually confirmed their sex. In the 1960s the IAAF and IOC turned to science and medicine to verify athletic bodies, testing all athletes for drugs but only the women for confirmation of sex/gender.

Figure 4. Caster Semenya at the 2012 Summer Olympics

The process did change to a genetic test, examining an individual’s DNA for the testis-determining SRY gene, though this brought its own set of issues, as an athlete could be deemed ineligible to compete based on an unknown genetic variation. These tests added to the pressure of the Olympics for female competitors: along with preparing for their events, they also had to mentally prepare for the tests they would undergo before even competing, a challenge male competitors did not face. The Olympics were not free of mandatory sex and gender tests until 2000, and the IOC still reserves the right to test individuals on a case-by-case basis [29]. One of those cases involved Caster Semenya, a South African runner who was subjected to sex testing by the IAAF after winning the 800 meters at the 2009 World Championships in Athletics. Semenya is described as hyperandrogenous, meaning her body naturally produces testosterone in the “male range,” but despite the ordeal of the sex testing she went on to win silver at the 2012 Olympics and gold at the 2016 Olympics [30].

Although a primarily American issue, the enactment of Title IX has both solved and caused problems for female sports. The federal law banning sex discrimination in federally funded education programs was enacted in 1972, mainly as an amendment focused on education. Its scope was extended to include sports when the final regulations were issued in 1975, and a “separate but equal” approach was adopted so that progress could be gauged. It was not without a fight in Congress, where opponents backed by the NCAA argued that the law could take away from male sports, that Title IX passed and went into effect in the United States [31]. The act adopted a “three-part test” regarding equality of opportunity between female and male sports. Satisfying just one of the three parts may show compliance with the act: a school must provide opportunities for male and female students in relative proportion to their enrollments, show a history and continuing practice of expanding opportunities in response to the developing interests and abilities of the underrepresented sex, or demonstrate that those interests and abilities have been accommodated by the school’s existing program. The sports offered at a school do not have to be the same for men and women (there does not need to be a women’s football team if there is no female interest, for example), but the opportunities for both sexes have to be proportionate to interest and population. Overall, the test is a way to gauge and promote the developing interests and abilities of women in sports [32].

There is also still an overwhelming social dimension to modern women playing sports. Women who participate in more masculine sports such as wrestling, boxing, hockey, football, and rugby are often seen as clashing with society’s gender ideologies. As public ideas about women competing in sports have been challenged, women’s interest in sports, including contact sports, has grown. More women are playing contact sports than ever before, and resistance to women’s participation has lessened at the same time [33]. However, challenges remain in the social perception of women playing typically masculine sports. There have been suggestions that the more masculine sports attract more women who do not identify as heterosexual because the characteristics of these games are explicitly opposite to the feminine-appropriate sports of the late-18th and early-19th centuries. Women playing these sports are sometimes seen as less feminine, and a derogatory and homophobic stance is sometimes taken against their participation [34].

Even though women’s sports teams have begun issuing professional contracts to their athletes, it is not all smooth sailing on that front either. The Women’s Rugby World Cup starts in Ireland in August, and the English Rugby Football Union (RFU) has announced that it will not be renewing the contracts of its 15s women’s players after the Cup. The focus will shift instead to supporting the 7s side, ending the 16 full-time and 16 part-time contracts of 15s players [35]. In July 2016, the RFU became the first union to award contracts to 15s players, a huge step forward in the women’s game, and the short term of those contracts has been questioned by several Members of Parliament. The RFU’s Director of Professional Rugby claimed that the women’s game works in “cycles between the 15s and 7s,” so the next year will see 17 professional full-time contracts go to 7s players, following other countries’ contracts for their 7s teams. The changes will reverse ahead of the 2021 Women’s World Cup to help athletes prepare for that competition just as they did for this year’s Cup [36]. No such cycle is seen in the men’s competitions, which has prompted debate over just how much the RFU actually supports its women’s side and the growth of the women’s game. England’s women’s rugby team hopes to ride the wave of success in English women’s sports: the soccer team is competing at the Euro 2017 Championship, the cricket team won the World Cup, and an English woman reached the Wimbledon semi-finals for the first time since 1977 [37]. Tonia Antoniazzi, a Member of Parliament and former Wales women’s rugby international, considers the public announcement just two weeks before the World Cup a threat to morale, even if the team knew back in April that their contracts would end after the competition. Antoniazzi believes the issue “highlights a massive inequality for women in Britain” [38].

Despite these challenges, sporting women have also seen great progress. Through the formation of several female sports leagues and a shift in public perception towards a more favorable standing, modern women are participating in more sports with many different characteristics. In the UK, for example, the Women’s Rugby Football Union grew from 12 members at its formation in 1983 to 2,000 women playing each week across 142 teams during the 1992 season, just ten years later. Though some still go out of their way to insult the women who play more intense games like rugby, the players themselves speak openly about how satisfying and thrilling their experiences are. The male counterparts of these traditionally male sports have also taken to supporting their female leagues and players, dedicating specific resources to developing the women’s games. Weightlifting has seen huge growth since the inaugural British Women’s Weight-lifting Championships in 1986; Bulgarian women have become extremely distinguished lifters, and there are an estimated 3 million women lifters in China [39]. And though seen by some as a “quota law” for women’s sporting opportunities, the enactment of Title IX pushed the number of high school girls competing in interscholastic sports in the U.S. from 300,000 in 1971 to over 3 million in 2010. Female participation in collegiate sports likewise rose from fewer than 32,000 in 1971 to more than 200,000 in 2010 [40].

Conclusion: How Much Has Really Changed?

Since ancient times, public perception has greatly shaped the participation of women in sports. Opportunities and public support have changed as women have gained access to more sports, in more places and at more levels, including sports traditionally considered masculine. The “science” used by mid-century physicians has for the most part been left behind, and female athletes are generally able to participate in any sport they choose. The threat of public backlash occasionally remains, but it is becoming less of a societal norm as more people are willing to support female athletes and call out sexism in the sporting world. Although it has not always been steady, progress has been made towards women gaining more support and recognition in their recreation. The financial aspects have begun to catch up, but the rugby contract issues that have arisen, as well as the disparity between national players and those of their hometown teams, show that there is still a difference in support for men’s and women’s sports and still progress to be made.



[1] “Sporting chance; In dress, sponsorship and travel, gender equality still eludes Olympic women,” Calgary Herald, August 1, 2016, (Accessed July 1, 2017).

[2] Calgary Herald, August 1, 2016.

[3] Anna Kessel, “’We’ve come a long way but the job isn’t finished’: In its 30 years the Women’s Sport and Fitness Foundation has fought many battles but, as its founders and luminaries recall, progress has never been easy,” The Guardian, October 25, 2014, (Accessed July 1, 2017).

[4] Calgary Herald, August 1, 2016.

[5] The Guardian, October 25, 2014.

[6] Susan Egelstaff, “Equality in sport still work in progress,” Sunday Herald, October 2, 2016, (Accessed July 1, 2017).

[7] Michael Speak, “Recreation and sport in Ancient China: Primitive society to AD 960,” in Sport and Physical Education in China, ed. Robin Jones and James Riordan (London: E & FN Spon, 1999), 20-44, accessed July 2017.

[8] Michael Speak, “The emergence of modern sport: 960-1840,” in Sport and Physical Education in China, ed. Robin Jones and James Riordan (London: E & FN Spon, 1999), 45-69, accessed July 2017.

[9] David C. Young, “Women and Greek Athletics,” in A Brief History of the Olympic Games (Oxford: Blackwell Publishing Ltd, 2004), 113-21, accessed July 2017.

[10] Jennifer Hargreaves, Sporting Females: Critical Issues in the History and Sociology of Women’s Sport (London: Routledge, 1994), 43-55, accessed July 2017.

[11] Hargreaves, Sporting Females, 55-58.

[12] Hargreaves, Sporting Females, 63-69.

[13] Hargreaves, Sporting Females, 61.

[14] Frances E. Slaughter, ed., The Sportswoman’s Library (Westminster: Archibald Constable, 1898), 8.*

*This historical source was cited in another book; the page that author included was page 8 but I haven’t been able to locate a copy of the book to confirm.

[15] Hargreaves, Sporting Females, 88-102.

[16] Hargreaves, Sporting Females, 83-85.

[17] Hargreaves, Sporting Females, 125.

[18] Deborah L. Brake, Getting in the Game: Title IX and the Women’s Sport Revolution (New York: NYU Press, 2010), 29-30, accessed July 2017.

[19] Hargreaves, Sporting Females, 126-130.

[20] Lincoln Allison, “The ideals of the founding father,” in Watching the Olympics: Politics, Power and Representation, ed. John Sugden and Alan Tomlinson (London and New York: Routledge, 2012), 18-35, accessed July 2017.

[21] Hargreaves, Sporting Females, 209-10.

[22] Hargreaves, Sporting Females, 211.

[23] “Women Athletes Ready for Pistol,” New York Times, August 20, 1922, 24, (Accessed July 13, 2017).

[24] Hargreaves, Sporting Females, 211-14.

[25] “Would Bar Women from the Olympics,” New York Times, January 4, 1929, 24, (Accessed July 13, 2017).

[26] “Non-Olympic Rule Adopted by Women,” New York Times, January 6, 1929, 191, (Accessed July 15, 2017).

[27] Hargreaves, Sporting Females, 214.

[28] Hargreaves, Sporting Females, 216-18.

[29] Jayne Caudwell, “Sex watch: surveying women’s sexed and gendered bodies at the Olympics,” in Watching the Olympics: Politics, Power and Representation, ed. John Sugden and Alan Tomlinson (London and New York: Routledge, 2012), 151-64, accessed July 2017.

[30] “Intersex athletes: A showdown for rights – but whose rights?” Deseret News, July 14, 2017, (Accessed July 26, 2017).

[31] Brake, Getting in the Game, 15-21.

[32] Brake, Getting in the Game, 69-70.

[33] Brake, Getting in the Game, 106-11.

[34] Hargreaves, Sporting Females, 253.

[35] “Dark Day for Women’s Sport,” Sport for Business, July 25, 2017, (Accessed July 25, 2017).

[36] Kate Rowan, “Tonia Antoniazzi MP says RFU contract snub a ‘kick in the teeth’ for women’s rugby and equality,” The Telegraph, July 24, 2017, (Accessed July 25, 2017).

[37] Sport for Business, July 25, 2017.

[38] The Telegraph, July 24, 2017.

[39] Hargreaves, Sporting Females, 273-74.

[40] Brake, Getting in the Game, 67.


Figure 1. Pottery illustration of woman on chariot, unknown date.

Figure 2. ‘Musical Gymnastics’, likely held at the NLCS’s second home from 1870 at 202 Camden Road, London, unknown date.

Figure 3. Alice Milliat, unknown date.

Figure 4. Caster Semenya, silver medalist in the 800m at the London Olympics, August 11, 2012.

Bombs for Peace: United States Arms Sales in the Middle East.


The United States is the largest arms dealer in the world, totaling $40 billion in weapons sales in 2015. The biggest buyers are developing nations in and around the Persian Gulf such as Egypt, Qatar, Saudi Arabia, Israel, and Iraq. The country that leads in both military assertiveness and arms buying is Saudi Arabia. According to an article in The Guardian, Saudi Arabia and other Sunni states have been on a buying spree, with purchases surging to $18 billion this year, up from $12 billion last year. The article describes weapons that include both defensive and offensive systems: jet fighters, missiles, armored vehicles, drones, and helicopters [1]. These record weapons sales come at a time when Gulf states are locked in largely sectarian wars, such as the Saudi-led intervention in Yemen, and the even more destabilizing Syrian conflict has set Iran and the Gulf states against each other. In previous years, arms were purchased with an intent toward defense and deterrence; it is obvious that Middle Eastern countries are now more willing than ever to use their weapons as a show of force. The article goes on to note that these “interventions” are generally airstrikes intended to demonstrate military force over political rivals, paying no attention to averting humanitarian disaster or to non-violent conflict resolution [2]. What led to the fragility of the Gulf states in a post-WWII power vacuum? Can a study of historical arms injections into countries locked in regional conflict tell us anything about the harmful effects of further destabilization in the Middle East? What role did Cold War-era politics play in arms distribution by both the US and Russia in the region? Does US history with Iran and Iraq shed light on a bigger problem in terms of whom America backs militarily? Has the sovereignty of the Gulf states been so disrupted by Western powers that fundamentalists are the only ones capable of taking power in the region?

The militarization of the Middle East by Western global powers is the ultimate cause of the instability and spread of terrorism throughout the world. As post-colonialism took shape in the Middle East following World War II, Arab countries struggled to find an identity while being intimidated and influenced by the same colonial powers they had resisted. The United States has served as the largest supplier of arms in the region, making it the largest instigator of violence, bloodshed, and genocide. As the US preaches peace and justice for all, it hypocritically supplies the same nations it vilifies with billions of dollars’ worth of military aid, ammunition, chemical weapons, and artillery, following a series of installed puppet governments, political rivalries, and an American political obsession with the threat of communism spreading in the region. The stance on military aid developed under Nixon created an environment ripe for injecting massive amounts of arms into countries that align with American interests, further contributing to global terrorism.

Figure 1: The 1916 Sykes-Picot Agreement, by which the colonial powers drew “artificial borders” to serve their own interests, creating tensions and disputes that last to this day.

By the turn of the 20th century, almost all Middle Eastern states were governed by colonialist powers. Arab states and their futures were at the mercy of secret agreements and negotiations revolving around British, French, and Russian interests in the region. As shown in Figure 1, the Sykes-Picot Agreement of 1916 effectively divided the Ottoman Arab provinces into regions under colonial control and influence; this division can be thought of as the root of post-colonial conflict in the present day, and the Islamic State of Iraq and the Levant (ISIL) has declared that its goal is to reverse its effects [3] [4]. The agreement, rife with conflicting promises, began the idea of “artificial borders” in the Middle East, drawn without regard to ethnic or sectarian characteristics, fragmenting the region to suit colonial greed and interests [5]. The colonization of the following years had lasting effects up until WWII, when many Arab countries began to fight for their own independence as a reaction to world disgust over the German conquest of neighboring sovereign countries. It was during the Second World War that the United States began to take a greater interest in the oil fields of the Middle East, most importantly those owned and operated by the California Arabian Standard Oil Company (CASOC) in Saudi Arabia. Soon, Saudi government officials began to see increased US military operations as a threat to their sovereignty, dampening US-Saudi relations as they took cues from neighboring Arab states that were working towards independence [6].

Figure 2: The Egyptian Army crosses the Suez Canal by pontoon bridge during the war of 1956.

It may have been the Suez Crisis of 1956 that began to warm Saudi-US relations again, but it was the Egyptian UAR attack on Saudi-backed Yemeni forces in 1962 that created the alliance that stands today. The United States’ interest in the Middle East was based on economic access to oil, in turn keeping oil prices low, and on deterring Soviet expansion in the region. In a letter to Egyptian leader Gamal Nasser, John F. Kennedy expressed his support for the Egyptians in the hope that, with nationalist states spreading, the region would be immune to Soviet communism. He went on to expect that the Yemen conflict would end with a United Nations resolution and that he was counting on the Egyptians to assure this [7]. Soon the Saudis began building up forces on their border in anticipation of UAR expansion. UAR planes then bombed Saudi bases, causing King Saud to appeal for US support, and President Kennedy immediately sent warplanes to the region as a deterrent. Kennedy was walking a foreign-policy tightrope, assuring allegiance to Israeli security while also seeing potential in other Arab states as a deterrent to Soviet meddling. It was in this era that advanced US weaponry was delivered to Israel and a trend towards momentous arms deals with several Arab states began.

The arms race that ensued during the Cold War was as vigorous as American paranoia over communism during the 1950s. The United States and the Soviet Union competed desperately for influence in the Middle East through guarantees of military aid and arms sales. Following the enactment of the Mutual Defense Assistance Act, the president was given broad authority to make agreements with allied countries and to provide them with a wide range of military goods and services. Between 1950 and 1967 the United States provided its allies with a total of $33.4 billion in arms under the Military Assistance Program, plus another $3.3 billion in surplus weaponry under the Excess Defense Articles program [8]. On top of these assistance programs, the US exported $11.3 billion worth of arms and equipment through its Foreign Military Sales program. As the Soviets began having success establishing military links with countries like Egypt, Syria, and Iraq, the US began supplying vast quantities of arms and ammunition to its own allies in the region while excluding any nation that did business with the Soviet Union. As oil revenue in the region exploded, so did the export of arms to states already positioned to be at odds with their neighbors.

Figure 3: An Iranian soldier in a foxhole wearing a gas mask. United States arms deals with Iraq throughout the 1980s included anthrax and other chemical and biological agents.

On July 25, 1969, Nixon delivered the Nixon Doctrine, a declaration that, according to Gregg Brazinsky, “the United States would assist in the defense and developments of allies and friends,” but would not “undertake all the defense of the free nations of the world.” [9] In short, each allied nation was in charge of its own security, and the United States would act as a nuclear umbrella when requested. Nixon essentially meant that the US would supply the arms and training to allies that would police the region themselves. The immediate effects of this “supplies, not troops” policy can be seen in arms sales to Saudi Arabia and Iran: in 1970, total arms sales to Saudi Arabia were $30 million and to Iran $160 million; by 1974, exports to the Saudis had jumped to $340 million and arms exported to Iran to $1 billion. [10] With this doctrine, the US took a less troop-heavy role in conflict, and its actions became more diplomatic. The problem was that the parties to localized disputes within the Gulf states read US supply as military-backed support. It is the complete US support of Israel and Israeli provocation towards regional states like Iran that causes such unrest and instability in the region.

As the Cold War evolved, so did US relations with Iran. Following the Iranian Revolution and the rise of Ayatollah Khomeini, Iran adopted an anti-Western approach to international relations. After the hostage crisis of 1979, in which an Iranian revolutionary group took 52 American diplomats hostage, relations between the two countries were frozen and sanctions were put in place. In the following years, the US began supplying billions of dollars in weaponry, military intelligence, and special-ops training to the countries surrounding Iran in the lead-up to the Iran-Iraq War. As Iran defended its borders and began pushing Iraqi forces back towards Baghdad, Reagan became fearful of Iran taking over the oil fields of Kuwait and eventually Saudi Arabia. This led to military aid and Iraqi purchases of billions of dollars in arms, including chemical weapons that would be used on Iranian troops. At the same time, the US began selling much-needed arms to Iran, since Iran’s arms supply had been inherited from the pro-US shah and Iran had been one of the largest importers of US arms prior to the revolution. [11] The Iran-Contra deals were meant to provide an access point for new relations between the US and Iran, but they backfired and exposed the continuing colonialist-style influence and warmongering the US was guilty of.

It is the false borders drawn in secret in the Sykes-Picot Agreement that created the tribalism and religious rivalries we see today, but it is the behavior of the United States that keeps the cycle of terror alive in the region. The United States economy is highly dependent on oil, and as long as that continues, powerful American corporations, and in turn politicians, will maintain an influence on Middle East affairs. After the United States military invaded Iraq in 2003, the Bush administration continued the pattern of supplying arms as military aid packages to oil-rich Persian Gulf states. These ended up being some of the largest sales of arms and ammunition to developing and developed nations ever. The sales to Saudi Arabia included a variety of sophisticated weaponry, such as air-to-air missiles and Joint Direct Attack Munitions, which turn standard bombs into “smart” precision-guided bombs. [12] The ethical problem with selling arms to the Saudis becomes clear with even a glance at their current war portfolio. Soon after the Obama administration sealed an arms deal of close to $60 billion, the Saudi military began airstrikes in neighboring Yemen that have continued to this day, and many have called the situation in Yemen a humanitarian crisis. [13] The Saudi Arabian government promotes Wahhabism, an ultraconservative religious movement, yet the very nation that promotes democracy around the globe, the US, is heavily arming those who stand directly against it, all in the name of profit from arms sales. We are seeing the repercussions of decades of major arms races and military operations in the Middle East in the form of religious-extremist terrorism, and we are making arms deals with the largest promoter of fundamentalism in the region. The world will pay the price that Republicans are cheering over, all so that private military arms companies can maximize profits by sending bombs to a place that seems so far away; as terrorism strikes closer and closer to home, we will have to take a harder look at the issues that should be so clear in a pursuit of peace conducted by selling bombs to the world.

[1] P. Beaumont, “The $18bn Arms Race Helping to Fuel Middle East Conflict,” The Guardian, April 24, 2015, (Accessed June 30, 2017).

[2] The Guardian, April 24, 2015.

[3] Sir Edward Grey, The Sykes-Picot Agreement, May 16, 1916, (Accessed July 22, 2017).

[4] Mark Tran and Matthew Weaver, “Isis Announces Islamic Caliphate in Area Straddling Iraq and Syria,” The Guardian, June 30, 2014, (Accessed July 22, 2017).

[5] Saad Eddin Ibrahim, Islam and Prospects for Democracy in the Middle East (Washington, D.C.: Center for Strategic and International Studies, 2002), (Accessed July 21, 2017).

[6] “Post-Colonial States and Struggle for Identity in Middle East Since World War II,” Foreign Policy Institute, October 23, 2015, (Accessed July 5, 2017).

[7] Dan Elasky, The John F. Kennedy National Security Files, 1961-1963 (Boston: The John F. Kennedy Library, 1979), (Accessed July 14, 2017).

[8] Rashid Khalidi, Sowing Crisis: The Cold War and American Dominance in the Middle East (Boston: Beacon Press, 2009), (Accessed July 5, 2017).

[9] Gregg Brazinsky, Nation Building in South Korea: Koreans, Americans, and the Making of a Democracy (Chapel Hill: The University of North Carolina Press, 2009).

[10] US Arms Control and Disarmament Agency, Defense Program and Analysis Division, World Military Expenditures and Arms Transfers 1970-1979 (Washington, D.C.: ACDA Publication 112, March 1982), (Accessed July 14, 2017).

[11] Noam Chomsky, “Cold War II,” ZNet, August 27, 2007, (Accessed July 5, 2017).

[12] Robin Wright, “US Plans New Arms Sales to Gulf Allies,” Washington Post, July 28, 2007, (Accessed July 5, 2017).

[13] “Yemen Conflict: A Nation’s Agony as Cholera and Hunger Spread,” BBC News, July 27, 2017, (Accessed July 28, 2017).


Figure 1. Sykes-Picot Agreement. 1916.

Figure 2. Egyptian Army crosses the Suez Canal. 1956.

Figure 3. Iranian Soldier wearing gas mask in foxhole. 1980.