Spring 2017 – Matthew Unangst History 105

Honor Killings in the Middle East


In today's world, women are fighting for their rights at every level across the globe.  Yet in certain parts of the Middle East, men are ordering the deaths of, and personally murdering, women in their own families as punishment for actions the men deem dishonorable to the family name.  Qandeel Baloch was suffocated to death by her own brother for posting 'playful' photos of herself on the internet.  Girls and women may be stabbed, stoned, burned, suffocated, or beaten for something as simple as talking to a stranger without the permission of a man in the family.  Action is being taken everywhere from the United Nations to Canada to stop these acts of violence; however, perpetrators can find cover in Sharia law, under which a family may forgive someone for murdering one of its members.  This is a modern issue that must be brought to an end.

Charleston Gazette, 5 August 2016

Geographic Focus: Pakistan, Afghanistan, India

Additional Search Terms: Qandeel Baloch, Sharia law, religion

Common Themes: One theme this topic raises is inequality: women are treated unequally to an extreme degree, losing their own lives with no say in the matter, while men hold the power to take an innocent human's life because their own ego was hurt.  The theme of conflict arises when the issue is viewed from a modern human-rights stance versus a centuries-old religious-tradition stance.  Society must decide where the line between religion and government is drawn.

QA: How much power will the government have when telling a religious group what is and is not lawful?

QB:  How willing are women within these cultures to stand up for their rights on this issue?



1a.) Benninger-Budel, Carin. Due Diligence and Its Application to Protect Women from Violence. Leiden: Brill, 2009.



1b.) McLeod, John. History of India. Westport, CT: Greenwood Publishing Group, 2002.



2a.) Goldstein, Matthew A. “The Biological Roots of Heat-of-Passion Crimes and Honor Killings.” Politics and the Life Sciences 21, no. 2 (2002): 28-37.

2b.) Ruggi, Suzanne. “Commodifying Honor in Female Sexuality: Honor Killings in Palestine.” Middle East Report, no. 206 (1998): 12-15. doi:10.2307/3012473.


Research Question: Will the fight for modern-day equality overrule centuries-old traditions such as honor killings?


How Climate Change Politics Have Changed over Time, by Joel Roeber

In August 2015, Rwandan scientists warned the United Nations Framework Convention on Climate Change (UNFCCC) that “[world] leaders must act now to prevent climate change impacts which could be ‘catastrophic’ for human health”[1]. To act on this warning, they proposed a plan to strengthen the Nationally Appropriate Mitigation Actions taken by countries around the world to reduce greenhouse gas emissions. The plan aims to kick-start action against climate change by the 2020s[1], minimizing humanity's impact on Earth's climate before it is too late for anything to be done. The importance of climate change as a major issue facing mankind has risen significantly in recent years, driven by growing scientific data supporting its legitimacy and by increased political and social awareness. But the political platform it created has undergone massive change in the nearly four decades since it entered political ideology. At its inception in the early 1980s, the political view was that climate change could be a potential consequence of nuclear war between the United States and the Soviet Union rather than of pollution and greenhouse gas emissions; the political view now, supported by scientific evidence, is that man has had a large impact on Earth's climate.

Climate change first came to be seen as an underlying possibility around 1983 in the United States[2], a time during which “science and technology were recognized as extremely important to society”[2]. The threat of nuclear war began to boil over as the Reagan administration found itself locked in a nuclear arms race with the Soviet Union. In what became known as the “war scare” of 1983, the Soviets used propaganda to create fear of United States intentions within their own population[3]. From this, mass hysteria grew around the threat of a nuclear winter that could result from such an engagement. Shortly after this period, scientific efforts to research climate change began to take off, as organizations such as the National Research Council staff began publishing findings gathered from their experiments. In its report entitled Effects on the Atmosphere of a Major Nuclear Exchange, the Council stated that nuclear particulate matter possessed the ability to “cause severe drops in surface air temperatures” as well as have “other major climatic effects in areas that are far removed from target zones”[4]. The result, in the late 1980s and early 1990s, was a shift toward viewing climate change not just as an outcome of nuclear warfare but also of mankind's own inefficient waste and pollution management systems.

Figure 1: TIME Magazine cover, 1983

The developing trend of rising annual temperatures worldwide was well documented, and much evidence was collected to support the theory that greenhouse gases were the primary contributors to change in climate, which in turn would have negative impacts on ecosystems all across the world[5]. During the 1990s a meeting was held between diplomats of the United Kingdom, Germany, and the United States to discuss a potential solution to the problem of global warming and climate change[6]. However, political opposition denouncing the legitimacy of climate change was also present. Author Erik M. Conway stated in his work Atmospheric Science at NASA that NASA's Earth Observing System (EOS) had its budget cut from $17 billion to $7 billion during the 1990s due to this opposition[7], and this perspective toward climate change continued throughout the decade into the new millennium.


Political interest in climate change started back up in the 2000s; however, it experienced great turmoil in trying to gain ground among the populations of both industrialized and unindustrialized countries. According to Frederik von Paepcke, one of the primary difficulties governments have had in establishing climate laws is that “most Political Leaders” are elected for “a few years only,” while it would take “decades” to determine whether certain climate change policies were successful[8]. This was further complicated by the economic recessions of the late 2000s, which made it impossible to budget a sector of United States and European economies for climate research. At the same time, countries in Asia such as China continued the rapid industrialization efforts kick-started in the late 1990s, but without any regulation of their pollution output or any system of environmental standards in place. Suddenly Western countries saw a new climate threat developing in China, and negotiations were quickly established to make sure that China and other Asian countries were not undermining the efforts to reduce greenhouse gas emissions[9].

Figure 2: Political cartoon by Herb Block, 1983

Interestingly, moving into the 2010s, China's industrial capabilities have rendered it able to produce highly sophisticated solutions to climate change. Dubbed the “green economy,” China's government aims to establish an economic system built on environmentally friendly technology going into the near future[10]. Should it succeed, Western governments will most likely adapt their economies similarly and follow suit. The measures China has taken to reduce emissions show that climate change is not an unsolvable problem, which brightens politicians' outlook on how climate change can be dealt with in the future.

Figure 3: Book cover, 2011

Overall, as evidenced above, the relationship between politics and climate change has grown drastically more serious in a span of just over 30 years. Change needed to occur, and still needs to, in order for humanity to progress in technology, infrastructure, and other fields, and to ensure that succeeding generations will be able to live on as well. Going into the near future, the relationship between Earth's climate and the various forms of politics will remain ever-changing and complex, but it is arguably the most important issue facing mankind today, and will likely remain so for a significant time to come.



[1]  “Experts call for action to prevent climate change impacts.” Proquest Newsstand. August 17, 2015. Accessed May 01, 2017.

[2] Badash, Lawrence. A Nuclear Winter’s Tale. MIT Press, 2009.

[3] Marder, Murray. “Agent: War Scare Gripped KGB: [HOME Edition].” August 08, 1986. Accessed May 3, 2017.

[4] National Research Council. “Effects on the Atmosphere of a Major Nuclear Exchange.” Proquest Newsstand. January 01, 1985. Accessed May 3, 2017.

[5] Fleming, James Rodger. Historical perspectives on climate change. New York: Oxford University Press, 1998.

[6] Cass, Loren R. “The Politics of Climate Change: The Origin and Development of Climate Policy in the United Kingdom, Germany, and the United States.” Proquest Newsstand. 2001. Accessed May 3, 2017.

[7] Conway, Erik M. Atmospheric science at NASA: a history. Baltimore, Md: Johns Hopkins University Press, 2008.

[8] Paepcke, Frederik Von. “Statehood in Times of Climate Change.” Proquest Newsstand. November 25, 2014. Accessed May 3, 2017.

[9] Asian Development Bank. “The Economics of Climate Change in Southeast Asia.” Proquest Newsstand. January 04, 2009. Accessed May 1, 2017.

[10] Liu, Manhong Mannie Ness, and David Huang. “The Green Economy and Its Implementation in China.” Proquest Newsstand. December 12, 2011. Accessed May 3, 2017.



Figure 1. TIME magazine cover, 1983.

Figure 2. Ronald Reagan political cartoon, 1983.

Figure 3. Book on China’s new “green” economic system, 2011.


(Final Project) Hikikomori: The Dark Shadow Cast by American-Led Globalization on Postwar Japan’s Male Youth


On one Friday afternoon in Kyoto, Japan, a woman named Yoshimi Kawakami made her way to a doorstep matching one of the addresses on her highly particular to-do list. Despite not holding her breath for any kind of response, she nonetheless stood there for two hours, talking through the door, hoping that perhaps this time she might be proven wrong. In a separate instance involving a different “client,” she was again denied conversation, prompting her to remind him that it was snowing outside and that she might be forced to spend the night at his doorstep unless he agreed to talk to her. Such occurrences are not uncommon in Kawakami’s life; after all, they are an unavoidable part of her job as a “rental sister.” [1]

A rental sister (or brother, in its less common form) is a mental health professional tasked with making initial contact with individuals reported—usually by their parents—to be suffering from a condition known in Japan as “hikikomori.” [2] According to Japan’s Ministry of Health, Labor and Welfare, three essential conditions must be met before someone may be considered hikikomori. First, the individual in question is not currently engaged in education, training, or employment. Second, the person is socially withdrawn, spending nearly all of his or her time at home, especially within the confines of his or her own room; such individuals have no close ties with anyone outside their immediate family and show a gravely concerning degree of antisocial behavior. Third, symptoms must last for at least six months, with no apparent pathological cause. [3]

What makes the case of hikikomori particularly interesting is how pervasive it is within a limited set of coordinates: geographical location, time period, and affected demographic. Though hikikomori behavior has been observed in various locations around the world, only in Japan is it considered an alarmingly problematic societal phenomenon. As of 2006, leading psychiatrists estimated that more than 1 million cases of hikikomori existed in Japan. [4] In fact, the American Psychiatric Association’s DSM-V makes no mention of the matter at all, implying that it is merely a culture-bound syndrome. Even in Japan, a great deal of debate still exists as to whether hikikomori should be regarded as a mental health condition or as a social problem. [5]

This disagreement among public officials, sociologists, and health professionals over how the subject should be approached seems to stem at least partly from the fact that the issue is relatively recent. One of the earliest professionals ever to encounter the condition was Dr. Saito Tamaki, a psychologist and medical director at Tokyo’s Sofukai Sasaki Hospital. “I didn’t have a name for it,” he recounted, explaining how he began to notice one patient after another in his office in the mid-1980s, all of whom reported shutting themselves in their own rooms because, for one reason or another, participating in society seemed no longer possible for them. [6]

Another predominant pattern Dr. Tamaki noticed was that most such patients were male, either in their teenage years or early twenties. In 1998, Dr. Tamaki would publish his first study on the subject, “Hikikomori: Adolescence Without End.” [7] Two years later, the issue would make its debut in Japanese media, after a series of sensational incidents involving hikikomori; only then did the topic of hikikomori finally become a matter of public interest, albeit a highly stigmatized one. This paper aims to provide a historical account of the possible causes of the issue, weaving together interconnected psychological, social, economic, and cultural elements into a tightly constructed narrative. In doing so, this paper hopes to show that the keys to understanding any civilization lie in its history.

Taking all the above into account, this paper puts forth the following argument: the phenomenon is rooted in centrally “hot,” peripherally “cool,” and overall lopsided market reactions [7] to shifting circumstances within the international landscape. These circumstances were set into motion first by the end of World War II in 1945, which enabled the postwar recovery process to take place, and then by the advent of the second wave of globalization in the early 1970s, which brought dramatic changes that largely altered the long-term trajectory of capitalist economies across the globe; Japan was no exception. [8]

Japan’s political and financial powers that be responded by seeking to preserve long-standing, extremely bureaucratic institutions and to protect the core of the status quo, even at the cost of dimming the future prospects of the younger generations yet to enter the workforce. This response ultimately left those younger generations in a paradoxical situation: they were expected to live up to societal expectations of postwar prosperity and collective participation in an economy that no longer presented adequate opportunities for doing so. The result was the economic marginalization of many young people, who were left virtually defenseless against a tsunami of cultural stigmatization that ruthlessly confined them outside the bounds of full-time employment. In the case of hikikomori, this particular context led them, voluntarily or otherwise, to adapt by embracing the avoidance-based approach promoted in their native culture, so as not to lose any more face. [9]

Let us begin by discussing the socioeconomic climate in Japan shortly after the Second World War ended in 1945. Needless to say, the first order of business was to redirect resources toward economic recovery and social prosperity. As a direct result of WWII (1939-1945), the Korean War (1950-53), and the prolonged American occupation (which ended in 1952), Japan would do this under the umbrella of American protection as part of the Western bloc in the Cold War that would soon ensue. Throughout this relatively brief span of time, sharp political and economic disagreements would emerge as Japan’s leaders in both the public and private sectors differed over the postwar vision the country should pursue. [10]

Consequently, between 1955 and 1960, large-scale partisan alliances and opposition would form. On one side, the socialist parties merged into a single political entity. In response, the Liberal and Democratic parties would also unify against their new common enemy, establishing the Liberal Democratic Party (LDP), and in doing so won a majority. Much like the two-party system in the U.S., this “one-and-a-half” party system would come to preside over Japanese politics until the 1990s, making it chiefly responsible for enacting into legislation the kind of socioeconomic policies that largely facilitated the present situation.

In all fairness, however, their job was not easy. (And, perhaps to their credit, they did pass a number of democratic reforms in the recovery period that represented a divorce from legacy in favor of the future, including an anti-monopoly law enacted in 1947 and the Trade Union Law of 1945.) For one, virtually every reform bill they proposed was met with sharp opposition from the socialist party as well as from the labor unions, who vehemently opposed the renewal of the ANPO treaty, seeing it as a symbol of a Westernization (not to be confused with modernization) process they were not yet sold on, among other reasons. An emblematic event here is perhaps the labor strike at the Miike coal mines in 1960. (This continual dissatisfaction with the political establishment soon led to the removal of the LDP’s leadership. As a silver lining, the party did shortly thereafter refine its approach toward labor rights and pass a remarkably successful “income-doubling” plan.) Ironically, the increasing political influence granted to unions during this brief period (despite the greatly influential Dodge Line legislation), which aided them in having major corporations embrace the lifetime employment system, a victory relished by the middle class at the time, was also, albeit unintentionally, a key factor in undermining the opportunities available to younger workers decades later. [11]

Moreover, the aftermath of WWII left the country with an acute shortage of resources and resulting hyperinflation. To combat this, the Japanese government found it necessary in the years 1945-49 to preserve many of its prewar and wartime legacies so as to maintain stability, using the program described next as a viable conduit. It also had to embrace plans that some might deem necessary evils, perhaps best exemplified by its adoption of the popular yet somewhat controversial “priority production program,” which had a profound impact on the postwar economy. Overseen by a cadre of esteemed economists, the program prioritized production above all else, including consumption and long-term sustainability, most notably in the coal, iron, steel, and fertilizer industries. In addition, it sought to undermine the concept of social class in favor of nationwide unity in a time of crisis.

Naturally, the coexistence of both reforming and conservative trends, with ardent supporters on each side of the policy aisle, sparked heated continuity-versus-discontinuity debates in the public sphere. However, consider what the war had left in its wake: the loss of Japan’s colonies, and therefore of access to cheap labor and natural resources; the destruction of much of its infrastructure; an economy that in 1946 could produce only 20 percent of its 1937 output, in the face of rapidly increasing demand from the postwar population boom and the return of millions of Japanese military officers from overseas; and the resulting hyperinflation, in which the 1948 wholesale price index for Japan’s major industries was about 105 times that of 1937. It would not be unreasonable to conclude that the government did not have much of a choice.[12]

Having stated all of that, the project was far from a mere temporary economic solution and carried far greater implications, the most important of which is perhaps that, as a result, the political establishment had a greater say in guiding fiscal policy than it ever had during the war. Also, as stated above, a good number of institutions maintained during the war were thereby protected and preserved through their incorporation into the program. Another important result was the establishment of the Economic Stabilization Board (ESB), which only grew stronger from mid-1947 onward. [13] (It may be worth mentioning that one of the reasons they were able to attain such a degree of influence was by politically eliminating powerful businessmen on the grounds of their wartime connections with the military.)

At any rate, the government used its newfound power to retain two major practices heavily used in the wartime period. The first was using “annual resource mobilization plans” to surgically allocate scarce resources to various divisions; for the reasons stated above, this seemed imperative even after the war. The second was tight government regulation of the flow of resources once such a plan had been devised. Even into the 1950s and beyond, this latter practice came to acquire an influence of its own, continuing to reflect itself in expansionary industrial policy on behalf of the Ministry of International Trade and Industry (MITI), and its espoused philosophies were largely accepted by the Ministry of Finance and the Bank of Japan during the explosive growth period soon to follow. This was key in fostering strong connections between corporations and banks, permitting vast sums of capital to flow easily between the two as the economy grew exceedingly in magnitude and became extremely competitive.

It is next to impossible to overstate the impact of these developments, especially when a sizable share of Japanese economic specialists assert that even today, more than fifty years later, much of the ideological and institutional machinery by which the Japanese economic system functions derives from this “1940 system,” which was all but obscure before the wartime period. Even more important, this system laid the groundwork not only for the economic boom outlined below, but for the economic regression immediately following it, and for the repercussions of both. [14]

Finally, before we move on to the “high growth era” of the Japanese economy, one last observation needs to be made as to why such fiscal policy was put into place, this time from a more global perspective that transcends interpreting this course of action purely in terms of the war and its aftermath. That interpretation, however relevant, is still rather myopic, as it does not take into account a handful of other factors essential to the bigger picture. Indeed, viewed through an alternative lens, some of the aforementioned policies had been discussed prior to the Second World War, or even prior to the invasion of China in 1937. Thus, as we have previously stated, this recovery period must also be seen as part of a long-term evolutionary economic process during which the nation responded in its own way to the two waves of globalization. From this point of view, these quasi “big government” strategies were also devised to supply a social safety net to curb the effect of the “negative market forces” witnessed mainly by first-world, industrialized nations at the time as a result of having to compete on a global stage.

But in addition to competitive necessity, there was also an ideological element. A prevailing political science theory in the U.S., the leader of the Western bloc of which Japan was a key member, held that countries would evolve linearly as they strove to emulate US-led values of democracy and capitalism, without taking into account the many other aspects critical to the progression of different nations. Though there is no explicit mention of this in either the 1951 or the 1960 peace treaty, it may very well be taken for granted as a tacit assumption on the part of Western ideology. We may call it the “Anglo-Saxon model of liberal capitalism.”

Moreover, from this we can also see a plausible explanation as to why governmental intervention does not necessarily entail tight regulation. If preserving certain prewar and wartime policies may be seen as the continuity, then there were also other facets representing discontinuity. Whether it liked it or not, the country had to respond to external events with far-reaching consequences, beginning with the first wave of globalization falling from grace with the dethronement of the gold standard in 1914, well before the Great Depression and WWII placed the last few nails in its coffin. The country also had to adapt itself to the new economic order rising from the ashes of the war, namely the U.S.-led form of international trade. [15]

We now move on to the second, loosely defined period of socioeconomic importance: the period of economic boom and rising affluence that took place roughly from the 1950s through the 1980s. Statistically, the annual growth rate of the economy in the years 1956-73 was a whopping 9.3 percent, and Japan’s GDP per capita went from $3,500 to $13,500 (not accounting for inflation). Between 1975 and 1991, the average growth rate was still an impressive 4.1 percent. This period highlights more than anything else the influence of the economic policies adopted by the Japanese government and their effects in shaping the long-term development of Japan’s economic system, and what set it apart from the liberal capitalism mainly practiced in Western economies: a “distinctive set of economic philosophies and ideologies that constituted the foundation of Japanese developmentalism.” More specifically, this entailed the following:

“[Japanese developmentalism] was sustained by nationalism and held a strategic view of the economy. It had a strong orientation toward promotion of exports, which became the backbone of the East Asian model of economic development. Studies on the characteristics of Japanese economic institutions indicate that the Japanese economy was governed by various non-market mechanisms, which clearly distinguished it from liberal market capitalist economies…Major manufacturers relied heavily upon subcontractors and also forged long-term relationships with each other. Cartels were used [as] an important means to protect sunset industries and weather downturn in business cycles. Banks not only practiced cross-shareholding with corporations, but were also the major source of industrial capital for Japanese corporations.” [16]

This economic boom peaked in the early 1970s and lasted through the mid-1980s, coming to be known by some as a “new stage of affluence and tighter centralized control.” [17] The staggering rate of growth sparked a great deal of scholarly interest, particularly in the 1980s. It is important to state here that there is much nuance regarding the forces that shaped the Japanese economic system during this time period and the one prior, which is intricately linked to it; there is no straightforward answer as to whether the direction it took should be attributed to the government or to the private sector.

Unlike the U.S. government, which primarily concerned itself with setting the rules and staying out of the game itself, the Japanese government took great interest in the structure of the economy and leveraged its industrial policies to exercise control over its trajectory. However, the degree of subtle influence the private sector had over the government is still unclear, which leads some analysts to contend that the primary credit for the boom belongs to the ambitions of the private sector and the unique characteristics of the Japanese management system, which was greatly influenced by Japanese culture and values inherited from the Shinto religion (cultural grounds) and by the practice of delegated monitoring (economic grounds). Still, from the policies previously delineated, it cannot be denied that the Japanese government did not leave its economy at the mercy of the whims of the market, and hence was a significant contributing factor in the present economic situation in Japan.

New research on the high-growth period emerged later, in the 1990s, during the “globalization debate,” offering new insights by exploring it not only from a domestic standpoint but from an international one as well. As it happens, international factors also played a key role in spurring growth: Japan profited greatly from the Bretton Woods system and the General Agreement on Tariffs and Trade (GATT). The dynamic resulting from this economic system and Japanese economic policy was a fixed exchange rate under which the government was able to implement expansionary policy and stimulate growth while simultaneously incorporating safeguards against inflation by adjusting capital flows. Furthermore, the semi-one-sided US-Japan trade relationship (along with those with other allies) was such that Japan could flood international markets with exports while insulating its domestic markets, which enabled the private sector in Japan to be seen as a source of social security for the citizenry.

This scholarly investigation yielded another interesting insight: the upshot of Japan’s economic strategies, the impressive rate of growth in particular, was more an unintended byproduct than a deliberately planned outcome. This may be exemplified by the fact that the highly competitive atmosphere ignited in the economy triggered high savings and high investment rates as a side effect, whereas the chief goal of the Japanese government in conducting such policy was to ensure that there were safeguards if and when things got out of hand. [18]

The 1990s and early 2000s marked a downward spiral in the Japanese economy. This was caused on the one hand by two main domestic factors: the rise and inevitable burst of the Japanese stock market and real estate bubble in the second half of the 1980s, and the radically reformist policies pursued by the government in the late 1990s. On the international side, drastic changes in the global economy accompanied the advent of the second wave of globalization in the early 1970s and the collapse of the Bretton Woods system, causing Japan to lose a precious advantage, since the international market thereafter replaced fixed exchange rates with floating ones and began to further (neo)liberalize financial practices.

We begin with the real estate and stock market bubble. In 1985, the same year the Plaza Accord was signed, the Nikkei index stood at 12,775. That number doubled only two years later. In the former year, the daily average of shares traded was 414 million; by the latter, it was 946 million. By 1988, the market capitalization of Japanese stocks was 346 trillion yen, 30 percent greater than that of the U.S. stock market, making it the largest stock market in the world. The same goes for the Japanese real estate market. By 1988, the total value of land in Japan was 1,673 trillion yen, almost three times that of the U.S. Over the course of the 1980s, the total amount of loans granted by banks to all industries surged by 120 percent, while loans to the real estate industry increased by 300 percent—not to mention that the majority of loans to non-banking industries were invested in real estate speculation.

If there has so far been an absolute rule in history, perhaps it is that after the rise comes the inevitable fall. In the case of the Japanese economy, this meant a decade-long stagnation after the bubble burst. Between 1989 and 1992, Japan had already lost 800 trillion yen in those two markets, and capital losses eventually reached 1,330 trillion yen. By 2003, land prices had fallen by a staggering 55 percent. Between 1997 and 1998, the Japanese economy found itself ensnared in a liquidity trap. Furthermore, annual bankruptcies rose from 14,000 at the end of 1995 to 19,171 in 1998.

Multiple explanations have been offered for this unexpected fall, including an aging financial infrastructure, the structural nature of economic cycles and bubbles, the end of the one-and-a-half-party system and the regime change that followed, and technological innovation cycles. To be sure, these were important contributors that expedited the process, but as we previously stated, they were not the chief causes. [19]

By now we have traced Japanese socioeconomic history from the end of WWII into the early 2000s. Doing so was important because this story highlights a struggle between reform and recovery, featuring an ongoing continuity-versus-discontinuity debate amid conflicting pressures stemming from different sources—domestic circumstances, international shifts, and so on. We present and highlight this conflict because, as we shall explain below, it largely contains the roots of the contemporary hikikomori issue. Before we get to that, however, there remains one final event to discuss, as it provides the vital link from this lengthy narrative of the past to the present situation—namely, the government’s attempts at reforming policies in the latter part of the 1990s.

In 1996, the Japanese government legislated deflationary fiscal policies in addition to the “Big Bang” banking reform. On the macroeconomic side, total central and local budget deficits were restricted to 3 percent. The government also lowered the national debt by 4.3 trillion yen and increased the consumption tax from 3 to 4 percent. The “Big Bang” demanded that banks follow certain procedures when their self-capital was low. [20]

These numbers, however, do not reflect what took place at the micro and individual levels as a result of this ongoing dilemma between restructuring the economy to adapt to globalization and focusing on recovery. While the former required moving toward a neoliberal or liberal market economy (LME) for the reasons mentioned above, the latter required building on a coordinated market economy (CME). Because reformist deflationary policies increased uncertainty in Japanese markets, investment tended to nosedive—and there can be no economic growth or recovery without investment. It is also important to note that the uncertainty went beyond economic factors. For one, a good portion of the Japanese public still found neoliberal ideology distasteful, and the more the market moved in that direction, the less likely they were to spend. For another, the Japanese financial district did not take kindly to disruption. Regardless, this continuity/discontinuity trend did not bode well for younger generations.

Let us examine in more detail how these large-scale battles affected Japanese youth. The forces of globalization pushed the Japanese economy to adapt itself to Western models, which carried along with their labor market systems the values of individualism, competitiveness, and light regulation. However, institutional forces in Japan were highly reluctant to reform the core of the permanent employment system, which represented a central cultural symbol in Japan. Their way of adjusting to the pressures of globalization and post-industrialization was therefore to expand peripheral employment so as to cut labor costs and stay competitive while simultaneously preserving the financial status of the elites and of those already in employment.

This, however, also meant that younger generations would largely be denied entry into that core, because there was no place for them. In this context, it can be said that Japan’s response to globalization was more of an asymmetric structural “reregulation” than deregulation: the core of the employment system remained the same—long-term employment, seniority-based status, etc.—while the limbs were altered through an expansion of part-time contract labor to reduce wage and benefit costs. As markets continued to cut off opportunities for stable, full-time employment, Japanese youth still in education were left with the stark realization that failing to secure full-time employment immediately after graduating could very possibly mean being permanently blackballed from it.

This “hot” reaction to globalization on the part of the Japanese business elites left Japanese youth of the 1990s and 2000s in the unenviable position of having to conform to mainstream values—for Japan, compared to Western countries, is particularly hostile to acknowledging success in any alternative form—in an economy that no longer offered such opportunities. Despite the economic changes, dominant cultural attitudes and expectations remained largely the same: graduate from high school, enter a prestigious university, secure a full-time job at a large corporation, and gradually climb the corporate ladder over a long period of time. As one gained promotions over the years, access to seniority-level wages would permit starting a family. Economically and culturally, this was the only viable method of thriving in the Japanese economy. [21]

For these reasons, pressure on young people to do well in high school was extremely high, since entering a prestigious university was the single biggest basis for being hired by a respectable company. This pressure was most keenly felt by young males, because failure to secure full-time employment would mark them as failures on cultural grounds. Women, on the other hand, were generally expected to work part-time to provide supplementary income for the family, so the pressures were not as high—though they, too, were still required to get good grades and graduate from a respectable university. However, because full-time opportunities were limited, Japanese youth were forced to find some way to cope with the situation.

In discussing this, we must keep in mind that Japan is an exceptionally conformist society, with strictly defined expectations regarding how the course of an individual’s life should unfold. Though it would be a mistake to impose a monolithic, uniform model on an entire civilization, research shows that roughly 90 percent of Japanese people fit the conformist model, more or less. Therefore, despite the fact that economic factors significantly decreased the rewards for conformity, conformity socially remained the basis for measuring success. Faced with this, Japanese youth fell into two coping categories: ritualists and retreatists. The exception, of course, are those fortunate enough to find full-time opportunities, who managed to escape the vicious cycle altogether—hence there is no need to discuss them here.

Young people in the ritualist category essentially continued to follow the predominant model despite receiving fewer economic rewards from the process—rewards that had long been taken for granted so long as one successfully adhered to the norm and did not go against the grain, which, again, was highly frowned upon. Though the increasing economic prosperity of earlier periods allowed greater access to higher education—more than 50 percent of high school students would now go on to graduate from four-year universities—an entire fifth of this well-educated group was forced to settle for low-paying, part-time jobs. In fact, during the 1990s, a whopping 25 percent of college-educated young people between the ages of 25 and 34 could not find any full-time job.

The inability to secure full-time employment denied one access to a number of essential privileges: a decent and stable income; important professional skills (much of this training in Japan happens only after one acquires a full-time position); complete social security benefit packages; the social status of being considered a true member of society, denied to part-time contractors; and the ability to build a family. It would be difficult to overstate the amount of anxiety and distress that could result from this—or even from merely contemplating it.

To further compound this, opportunities for innovation continued to be blocked by the institutional powers that be. To put it simply, “innovation” in an adaptive sense—entrepreneurship, independent content creation, and the like—was a virtual impossibility in Japan. Because of this, many young people were forced to work within the confines of the system despite its decreasing rewards. Moreover, once one was trapped in marginal employment, moving into the rigid core proved extremely difficult. Surprisingly, a large portion of disenfranchised Japanese youth did not protest this disparity between the demand to accomplish dominant cultural goals and the clear lack of legitimate means for doing so. And even if innovation were possible, we must remember that none of these Japanese youths had been adequately trained to thrive in such a context, which exists only in theory as far as Japan is concerned.

There was, however, another kind of response—the avoidance-based, retreatist response—in which some youth, disillusioned by the dimming prospects created by the aforementioned duality and compelled by choice or necessity, opted out of the system altogether. This is the category that largely came to be known as hikikomori. Since the early 2000s, the phenomenon has received a great deal of attention in the public sphere as a dramatic social problem due to its severity and potential future implications. By 2003, up to 10 percent of Japanese youth between the ages of 20 and 24 were out of work.

In many cases, it was found that these youths either simply stopped looking for a job after numerous futile attempts or quit going to school or college at some point because the pressure was too much to bear. In an atmosphere where the first and foremost priority was getting top grades and doing exceptionally well on high school and college entrance exams that emphasized a single correct answer for every question, this is not particularly surprising.

There are a number of sociocultural “enablers” that allowed this to happen. First, Japanese parents are extremely unlikely to exercise coercion in the sense of forcing their children to return to school or keep searching for work. Additionally, they consider themselves responsible for taking care of their children and ensuring that they have enough food and money to survive despite opting out of society; and because they also see the situation as a failure on their own part (not to mention the overwhelming sense of guilt, shame, and stigmatization it brings due to cultural factors), they tend to accept it and are very reluctant to seek professional help. Though hikikomori may be fortunate enough to have their families support them in the present, the problem comes when their families are no longer able to do so, due to death or otherwise. When that happens, the fate of this “missing million” is anything but certain.

That said, solving this problem is anything but simple. We have already discussed the economic factors that created an extremely rigid labor market: even if the hikikomori were to leave their rooms and begin searching for jobs, there is no guarantee that they would be hired. On the contrary, the odds are very much against them—the highly institutionalized Japanese economy is not open to providing second chances, and a hikikomori history is a major red flag for many employers. [30]

But it is not only the external factors that must be considered; internal factors—namely the mental aspect—play a significant role as well. To highlight this, we must try to understand from a psychological and motivational standpoint why these youths became hikikomori in the first place, by looking at models of motivational and behavioral patterns. Research suggests that Japanese society adheres to an “interdependent cultural system,” in which attention is mostly focused on one’s shortcomings, whereas the U.S. can be categorized as an “independent cultural system,” in which attention is mostly focused on one’s perceived strengths and what makes one unique. What this means is that Americans tend to redouble their efforts in response to successes rather than failures, whereas in Japan it is the other way around.

In a series of studies conducted in Japanese universities, it was found that low-risk groups (i.e., those likelier to succeed in the Japanese motivational system) did indeed work harder in response to perceived failures to comply with the mainstream, and scored low on independence-based feedback. By contrast, high-risk groups were found to adhere more to the independent motivational pattern, receiving lower scores on interdependence-based feedback and showing a greater likelihood of becoming “cultural dropouts.” Here is the kicker, though: while likely retreatist groups scored low on the interdependence-based motivational pattern, this did not necessarily translate into doing well on its counterpart. In other words, perceived failure to comply with the norm proved sufficiently discouraging that the most likely outcome for such Japanese students was to give up on the system. And because of that, they remain in hiding.

What this shows is that retreatists’ behavior can be explained by the overall risk-averse motivational tendency in Japan, which is reflected in the strategies required for success in different cultural contexts. In the U.S., for example, high self-esteem is considered essential for success, because the benefits it offers outweigh the costs of standing out and disrupting social harmony. In Japan, by contrast, social harmony is more important to being both functional and successful, and hence people tend to avoid standing out at any cost so as not to rock the boat. Similarly, the retreatist pattern shows that instead of trying one’s odds for the potential benefit of succeeding via institutional means, the safer behavior is to avoid failure, because the cost-benefit analysis runs the other way in this particular cultural context. [22]

On that note, this paper arrives at its conclusion. As we have seen, the hikikomori situation emerged from a paradoxical goals-means duality in which “hot” market reactions to globalization, in the face of unchanging social expectations, undermined the ability of a significant subset of Japanese youths to integrate successfully into the labor market. We first analyzed the economic aspect and showed how the present economic situation came to be, from the end of the Second World War to the present day.

The economic portion was divided into three periods: the postwar recovery, the boom, and the subsequent crash. The first period provided the context for the existence of modern Japanese economic institutions. The second demonstrated why these institutions were reinforced, justified by the boom and empowered by the Bretton Woods system and the GATT. Lastly, the third period demonstrated how the changes accompanying the second wave of globalization turned the tables against the Japanese economy, and how Japanese youths in particular were most vulnerable to, and thus victimized by, this shift.

In covering this, we delineated how the hikikomori situation emerged from an economic point of view and then highlighted the sociocultural elements that perpetuate it. In doing so, this paper hopes to have successfully combined the economic and cultural aspects of the problem into a seamless account that covers all its important dimensions, thereby supporting its main argument. We believe this issue is of great importance to understanding the function of history, for it shows how large-scale changes can have drastic, far-reaching, and unintended effects—and just how greatly our lives can be affected by global forces.


[1] Maggie Jones, “Shutting Themselves In,” New York Times, January 15, 2006, (Accessed March 18, 2017).

[2] New York Times, January 15, 2006.

[3] Kaitlin Stainbrook, “All About Hikikomori: Japan’s Missing Million,” Tofugu, June 18, 2014, (Accessed March 22, 2017).

[4] New York Times, January 15, 2006.

[5] Tofugu, June 18, 2014.

[6] New York Times, January 15, 2006.

[7] Tofugu, June 18, 2014.

[8] Tuukka Toivonen, Vinai Norasakkunkit, and Yukiko Uchida, “Unable to Conform, Unwilling to Rebel? Youth, Culture, and Motivation in Globalizing Japan,” Frontiers in Psychology 2, art. 207 (2011): 1-3, accessed March 28, 2017, doi:10.3389/fpsyg.2011.00207.

[9] Toivonen et al., “Unable to Conform, Unwilling to Rebel?,” 3-5.

[10] Wesley Sasaki-Uemura, “Postwar Society and Culture,” in A Companion to Japanese History, ed. William M. Tsutsui (Massachusetts: Blackwell Publishing Ltd., 2007), 316.

[11] Sasaki-Uemura, “Postwar Society and Culture,” 316-319.

[12] Bai Gao, “The Postwar Japanese Economy,” in A Companion to Japanese History, ed. William M. Tsutsui (Massachusetts: Blackwell Publishing Ltd., 2007), 300-301.

[13] Gao, “The Postwar Japanese Economy,” 302.

[14] Gao, “The Postwar Japanese Economy,” 304.

[15] Gao, “The Postwar Japanese Economy,” 304.

[16] Gao, “The Postwar Japanese Economy,” 304.

[17] Sasaki-Uemura, “Postwar Society and Culture,” 316.

[18] Gao, “The Postwar Japanese Economy,” 304-305.

[19] Gao, “The Postwar Japanese Economy,” 305-308.

[20] Gao, “The Postwar Japanese Economy,” 308-311.

[21] Toivonen et al., “Unable to Conform, Unwilling to Rebel?,” 2-3.

[22] Toivonen et al., “Unable to Conform, Unwilling to Rebel?,” 3-8.



Western Foreign Policy and its Effects on the Southeast Asian Drug Trade

Download PDF

Across the United States, a silent killer is working its way into the homes and lives of everyday individuals. Synthetic opioids like the painkiller fentanyl (which can be as much as 50 times more potent than heroin) have caused a public health crisis in the United States, and many US government officials are quick to point the finger at countries such as China. At the start of 2017, the head of the DEA travelled to Shanghai (for the first time in 12 years) to meet with Chinese government officials and build better ties between the two nations [1]. But how did a part of the world in which a considerable number of nations enforce the death penalty for drug possession become a major source of the deadliest narcotics sold on the black market? How and why did China, a country that was coerced into widespread drug use during the Opium Wars, arrive at such strict drug policies? The modern demand set by Western nations such as the UK and US has created widespread changes in the drug trade of other nations. But for a fuller understanding of this contemporary paradigm, the effects that the Western world has had upon the drug policies of other nations must be analyzed. At what point did Western colonization of Asian countries irreversibly set the tone for future drug policy in these nations? What major events produced the governmental policies we now see? In particular, we will closely examine the influence that Western powers had after World War 2 on the modern narcotics climate in this part of the world. The rush to set up hastily designed governments after such a global conflict was seen all around the world and had effects in many different nations, but the examples we will see in Southeast Asia are very pronounced when placed in the context of narco-military organizations.

Western imperialism and colonialism after the Second World War created a perfect storm for the birth of militant narcotics organizations in the southeast corner of Asia. The interwoven nature of these organizations spanned the borders of several nations, and their influence could in some ways be seen thousands of miles away in some of the most politically and economically advanced nations in the world. To fully understand the repercussions that the period following WW2 had on the narco groups of this era, we must first set the stage of events in SE Asia that led the region to where it was in the late 1940s and early 1950s. Prior to the start of the 20th century, population saturation in South China led to the mass migration of individuals seeking opportunity; these same individuals carried with them the Chinese habit of smoking opium. For much of the first 40 or so years of the 1900s, drug use was seen as a means of ensuring that at least some of the populace was dependent on state-provided drugs, which in turn guaranteed the government a stable labor force and a continuous source of income. Though drug use was rather common in many parts of SE Asia during this time, opium trafficking was still not as big a business as it was in other parts of the world. This changed during the Second World War, however. A sharp increase in opium production during WW2 can be seen in the region of Indochina, which went from 7.5 tons of opium produced in 1940 to 60.6 tons in 1944 [3]. This suggests that the authorities in the region used revenue from opium sales to fund their wartime campaigns.

An excellent example of the effects that imperialism has had on Asian countries after World War Two can be seen in a report on the “Golden Triangle” (a narcotics conglomerate centered where the borders of Laos, Thailand, and Myanmar meet) published September 12th, 1978, in The Globe and Mail. The article, published by Reuters, followed the arrest of 40 individuals across several nations and the confiscation of $4.5 million worth of heroin. The author suggests that this international crime syndicate was responsible for the majority of the product found in the European heroin market (assuming that no other major Asian drug organizations supplied heroin to Europe) [3]. If this assumption is correct, then one major influence Western society has had on the drug trade in this part of the world is as simple as the relationship between supply and demand. However, this capitalistic viewpoint can only begin to scratch the surface of how Western society influenced drug production in SE Asia. In 1948, Burma gained independence from the British and hastily attempted to set up a government that was destined to collapse at the slightest sign of mutiny.

Figure 1: An image of the Golden Triangle Region.

A dizzying array of political, economic, and military events had to take place over several decades to allow the flourishing of drug organizations that were, at bottom, warlords leading defected revolutionary armies. One of the critical early events in the origins of the Golden Triangle can be traced to January 1950, as the communist civil war in China was wrapping up. Mao Zedong’s communist armies were quickly approaching one of the last strongholds of the Republic of China, in the province of Yunnan. The forces of Chiang Kai-shek (leader of the Republic of China until 1975) began to flee, and the commander of the ROC forces in Yunnan was unable to stop a detachment of 1,500 of his soldiers from escaping the advancing communists into neighboring Burma. These soldiers, remnants of the Nationalist Party (the Kuomintang, or KMT), carved paths of destruction through much of Burma and acted as mercenaries for hire through much of the 70s and 80s. During this time, the KMT received support from a number of organizations, including the CIA. American policy at this point was to support any organization that could help halt the advance of communism into still-developing parts of the world. These actions by the US government hurt diplomatic ties with the Burmese government for many years to come, and may have impeded later American attempts to halt the production of heroin from opium in northern Burma.

Figure 2: Burmese government soldiers burning mounds of confiscated heroin.

The onset of the Korean War in 1950 brought about a new policy in Washington that supported the development of a close relationship with the Burmese government, seen in part as a way to deter the advance of Chinese communists into the region. Despite the best diplomatic efforts of the US, Washington eventually decided to cut its losses and cede influence over Burma back to the British. A failed assault on the Chinese province of Yunnan by the general Li Mi had dissuaded the US from further immediate attempts to stabilize the region. The CIA, which had until this point directly supported Li Mi, began to slowly defund his army both monetarily and logistically in November of 1951. This decision would have massive repercussions for the formation of a number of army-states derived from the massive battalions under Li Mi’s command [5]. It meant that a large number of highly trained and well-armed soldiers were disbanded with no chain of command. Career soldiers who suddenly have no reliable source of income are undoubtedly a massive resource for any potential drug lord, so this provides another example of how Western intervention in SE Asia in the period following WW2 created ripple effects that easily allowed the formation of the first iteration of the Golden Triangle.

An ethnic group within Burma that played an integral role in the formation of the Golden Triangle is the Wa tribe, located in the eastern hills of the country in a region known as the Shan State. Shan is, and has long been, a major stronghold of many of the Golden Triangle’s key players. This group has historically had a very mercurial relationship with members of the Golden Triangle, owing to the financial and military support provided by the Triangle’s armies but also to continued scrutiny in later years from anti-drug factions. The political arm of these people is the United Wa State Party, and their military representation is the United Wa State Army. The Wa region plays an integral role in the transport of illicit substances along some of the most important of the Triangle’s smuggling routes [11]. Though the army and its parent political organization have existed only since the late 1980s, the founders and leaders of this movement were instrumental in ensuring that the traffickers operating prior to the Triangle’s formation were well protected from the authorities [13,15]. When the KMT arrived in this region following its expulsion from Yunnan in 1950, the residents of the area gained the muscle to begin producing heroin at an unprecedented rate, with no repercussions.

Some of the other primary factors that allowed for the formation of the Golden Triangle can be seen in the context of the fallout from the Cold War. These include a suitable climate for the growth of opium, coca, and cannabis; the sustained existence of legal systems that failed to appropriately punish certain crimes; and political corruption. The collapse of the first world/second world paradigm led to the formation of the “grey-area phenomenon,” or GAP, an area where conflict was suspiciously scarce during the Cold War [9]. Another key factor in the formation of drug organizations in SE Asia at this time was the decolonization movement sweeping through much of the third world. One may speculate that many of the most intelligent and civically scrupulous individuals in these nations were too focused on breaking the shackles of Western rule to notice the development of a massive underground market spanning the borders of the most powerful nations on the continent. This hypothesis is supported by the fact that the most powerful warlord in the Golden Triangle during the last decade of the Cold War (and for five years after it ended), a man by the name of Khun Sa, held a standing army of twenty thousand soldiers along the Thai-Burma border. During this time, it was estimated that Khun Sa was responsible for producing over half of the world’s heroin supply [21]. Other sources estimate that in the 60s, 70s, and 80s, the other half of the world’s heroin supply came from Khun Sa’s main competitor, Lo Hsing-Han, who also received backing from the US government in exchange for control over regions of the Triangle that were considered strategically advantageous and therefore undesirable for communist forces to capture [11].
This led to the use of US-supplied weapons in conflicts between the Burmese and Thai governments, as well as skirmishes between the military forces of both nations and soldiers from armies who were aligned with

The economic boom of the 1950s in the US drove demand for all kinds of goods, and the capitalist mentality did not stop at the sale of illegal drugs. This fact, coupled with the remnants of Prohibition-era criminal organizations, created a launchpad for a flourishing underground market in the States after the war. However, it was not until roughly two decades later, during the onset of the Vietnam War, that American demand for substances like opium became even more pronounced. American resources were even used to traffic bulk amounts of opium: heroin was sent stateside from Saigon in the coffins of dead US soldiers [7]. Robins et al. estimated that approximately 34 percent of US soldiers had tried heroin, with roughly 10 percent of those individuals trying the drug at least once more upon returning to the US [17, 19]. By the early 1980s, the Golden Triangle had gained too much influence and loyalty in its tri-nation domain to be easily uprooted by any single government. Today, this same black market has grown into a billion-dollar-per-year industry within the US, with strong ties to the opioid epidemic that has helped lower the life expectancy of middle-aged Caucasian Americans for the first time in decades.

Figure 3: Confiscated opium.

The history of Western interaction and intervention with the drug trade in this region is bloody and, in the modern context with which we began this analysis, deeply ironic. US policy during the conflicts that perpetuated the formation of the Golden Triangle helped solidify the socioeconomic power that narcotraffickers currently hold in this region of the world. This discussion of the Asian-American drug trade began with the opioid epidemic in the US, which is partially fueled by the import of fentanyl from places like China. It is almost impossible to state exactly how large a role the Golden Triangle's heroin production and export played in setting the stage for this public health crisis. Because of the striking similarities between the two drugs, heroin users turn to the much deadlier cousin of their substance of choice because of its ability to achieve similar effects at a much lower dosage. It is entirely possible that if past American foreign policy had been structured differently, the supply of narcotics from the Golden Triangle would have been greatly diminished in the 1960s, 70s, and 80s, and drugs like heroin might never have become as commonplace as they are in the modern US. This topic provides a clear example of how American foreign policy in other parts of the globe can have unforeseen effects that eventually harm US citizens, decades after trusted government agencies make questionable policy choices.

[1] Associated Press, "DEA Opens Shop in China to Fight Synthetic Drug Trade," Telegraph-Herald (accessed January 26, 2017).

[2] Telegraph-Herald, January 7, 2017.

[3] Reuters, "40 Arrested as Drug Ring Is Smashed," The Globe and Mail, September 12, 1978 (accessed February 7, 2017).

[4] The Globe and Mail, September 12, 1978.

Gibson, Richard Michael, and Wen H. Chen. The Secret Army: Chiang Kai-shek and the Drug Warlords of the Golden Triangle.

Gibson and Chen, Secret Army.

Chin, Ko-lin. The Golden Triangle: Inside Southeast Asia's Drug Trade. Cornell University Press.

Chin, Golden Triangle.

Chambliss, William. "Markets, Profits, Labor, and Smack." Contemporary Crises (1977).

Chambliss, "Markets and Smack."

Chalk, Peter. Southeast Asia and the Golden Triangle's Heroin Trade: Threat and Response.

Chalk, Threat and Response.

Lu, Hong, Terance D. Miethe, and Bin Liang. China's Drug Practices and Policies: Regulating Controlled Substances in a Global Context. Routledge.

Lu et al., China's Drug Practices.

Chouvy, Pierre-Arnaud. "Drug Trafficking in and out of the Golden Triangle." 2013.

Chouvy, "Drug Trafficking in and out of the Golden Triangle."

Walker, William O. Opium and Foreign Policy: The Anglo-American Search for Order in Asia, 1912-1954. Chapel Hill: University of North Carolina Press, 1991.

Walker, Opium.

Hall, Wayne, and Megan Walker. "Lee Robins' Studies of Heroin Use among US Vietnam Veterans." Addiction 112 (2017). Accessed March 21, 2017. DOI: 10.1111/add.13584.

Hall and Walker, "Lee Robins' Studies."

McCoy, Alfred W. "Covert Netherworld: An Invisible Interstice in the Modern World System." Accessed April 29, 2017.

McCoy, "Covert Netherworld."

Geographic focus: Thailand, Burma, Laos.


    Search terms: (China AND fentanyl), drug* trade, black market OR drug market, golden triangle, imperialism, colonialism.

    Primary Source Database: Proquest Newsstand.

    Primary Source Search Date Limiter: Between 1950 and 1975 was the date range that fit best. Between 1975 and 1978, The Globe and Mail had much information on the power of narcotics and military organizations in SE Asia.

    Potential date range for project might be 1950 to 1979.

    Historical Research Questions: What role did colonialism and imperialism play in the formation of cartels in SE Asia? How did imperialism and colonization create an environment conducive to drug trafficking?

Deforestation Final paper

Download PDF

People have been cutting down trees since the 1800s.[1] In doing so, they kill off wildlife and damage the environment that sustains the world: cutting down trees destroys animals' food supplies and homes and takes oxygen away from humans. Deforestation connects to globalization because once people began settling North America, they started cutting down trees to build homes and to obtain new resources.[2] Massive numbers of trees were cut down. People decided cutting down trees was a good idea because of the goods they could make, so logging became a business and soon many more trees were being felled. Deforestation is a problem all around the world, but the biggest case is the Amazon forest in Brazil, where logging is destroying the land, taking resources away from animals, and making it harder for them to survive and keep their homes.

There are several ways deforestation can happen. Natural deforestation is caused by natural forces: weather that is too hot, too cold, or too dry. If an area is too dry, fires often break out, and they destroy the land because it is so dry and hot that there is no water or anything else to stop them. This counts as natural deforestation because no human has anything to do with it, and the plants and trees slowly grow back. Human-caused deforestation is different: the plants and trees do not grow back on their own.

Deforestation was not a huge problem in the 1800s because relatively few people were cutting down trees.
But deforestation then became a big problem in major forest areas, including the Amazon in Brazil. Major deforestation started in the late 1960s, even though that is not when it began. At the time, the Amazon covered "approximately 40% of Brazilian Territory and is a region with a low population density."[3] Not caring about the population or any other part of the Amazon, people did not realize what was going to happen in the future. Many people did not see a problem because they did not know the effects deforestation had on the rain forest, including destroying animals' homes, killing animals, and making it hard for other things to grow. By removing parts of the Amazon, including its trees, we are destroying parts of the world that humans and animals need to live, including sources of oxygen. There is also serious climate change involved: deforestation of the Amazon basin was accelerated by the Brazilian government in a region "which often suffers from severe drought and to open territory for development."[4] As parts of the Amazon forest are removed, the climate changes, and much else changes too, because people want to take the land to build homes and need the wood from the trees for many purposes: building homes, tables, and chairs, for example.

Map of Brazil showing the rain forest

Mankind being so well known for deforestation creates many problems: "it is not until a man's pocket book is touched that he becomes aware of the existing danger."[5] In other words, people did not care until deforestation became a 'huge' issue. Once it did, many people started to worry and tried to fix the problem, because it was destroying the Amazon forest. Many horrible things come out of deforestation. It kills off trees and plants, and in killing those off it also kills off what the animals have: it takes away their homes and the places they hide.[6] This causes many problems, because with their habitat destroyed, the animals have nowhere to live. As people build houses, the animals wander in and try to live where the humans live, which is not a good idea.[7] Many animals have been killed over the years because of deforestation; with their homes destroyed, they could not survive.

Continuing with deforestation, [6] "nearly 70 percent of the forests have high population are now down 10 percent in the last 30 years." This shows how much forest is being destroyed, and with it how much of the animal and plant population, and how many specific things that we need in the world, are being lost. Many different kinds of issues come with deforestation, because it kills so many things.

All in all, there are many problems with deforestation in the world. It causes many issues, and many animals, trees, and plants are being ruined. Deforestation has been happening for many years and continues to this day.

[1] Zulfiqar, Mohammad Mohad. "Combatting the Menace of Deforestation." The Nation (AsiaNet), August 8, 2016.

[2] Cuff, David J. "Deforestation." In Oxford Reference, 2001.

[3] Dejesusparada, N. ; Demoraisnovo, E. M. L. ; Dossantos, A. P. Instituto de Pesquisas Espaciais. Deforestation planning for cattle grazing in Amazon Basin using LANDSAT data – NASA-CR-157907. Rome, Italy: UN/FAO Training Course on Remote Sensing Application, 1979

[4] Friedman, Irving. "The Amazon Basin, Another Sahel?" Vol. 197. American Association for the Advancement of Science, 1977.

[5] “Forest Saving as a Necessity,” New York Times, November 21, 1920

[6] Green, Glen M., and Robert W. Sussman. "Deforestation History of the Eastern Rainforests of Madagascar from Satellite Images." Science 248, no. 4952 (April 13, 1990).

[7] Howe, Marvine. "Conservationist Stirs Furor in Brazil." New York Times, June 2, 1974.

Ivory Trade: The Devastating Impact It Has on East African Elephants

Download PDF

Most African elephants face local extinction due to illegal killings for their ivory tusks. Last year in Kenya, a pilot flying over Tsavo East National Park spotted eleven dead elephants, their tusks hacked off, while the survivors moved among them. The elephants had been killed by an armed gang of Somali poachers. According to an article by Catrina Stewart, Tsavo, Kenya "had 35,000 elephants in the late 1960s, but by the late 1980s, elephants numbered just 6,500, an 80 percent fall." [1] Organized and heavily armed gangs are targeting the elephant population, while conservationists do whatever they can to save the species. Poachers are more tempted than ever now that the price of ivory has risen. The elephant species is headed toward extinction, but the demand for ivory from Asia, particularly China, shows no sign of stopping anytime soon. The Kenya Wildlife Service is working to preserve elephant habitats and guards Tsavo East National Park with armed rangers. [2] Since then, Tsavo's elephant population has partially recovered.

Over the last 200 years the elephant population has kept declining because elephants are killed for their ivory or simply for sport. The value of ivory earned it the nickname "white gold." The poaching of elephants dates back to ancient times. Elephant tusks, preferably from East Africa, were used to make jewelry, musical instruments, and more before the invention of plastic. Attitudes toward the ivory trade changed over time: when people realized that elephants were being killed at an alarming rate, they began a movement to ban the ivory trade globally. They largely succeeded, except in China, but the ban has not stopped illegal poaching, which continues to this day. The continuous demand for ivory has directly contributed to the endangerment of the elephant species. Although killing elephants for their ivory or for a trophy is a tradition that dates back to ancient times, it is now illegal, and poachers need to stop killing these innocent animals because the species is in danger.

Hunting has always been a core tradition in Kenya, Tanzania, and other parts of Africa. The book Black Poachers, White Hunters: A Social History of Hunting in Colonial Kenya by Edward Steinhart states that prior to European colonization, "Elephants were hunted for food as well as for their tusks, then increasingly for their ivories only."[3] The ivory was then sold to Arab merchants. Eastern Kenyan hunters known as the Waata used longbows and iron-tipped arrows to hunt elephants. Hunting played as important a role in Tanzania as it did in Kenya. Uzigua, located in northeastern Tanzania, supplied livestock, ivory, and slaves in pre-colonial times to neighboring and distant settlements in exchange for cloth, beads, and copper wire. The farmers relied on trade and on their agronomic system to prevent crop loss in times of drought. They also traded with migrant hunters from Kenya and southern Tanzania. Another book, The Politics of Environmental Control in Northeastern Tanzania: 1840-1940 by James Giblin, describes how in the last pre-colonial decades migrant hunters "became more numerous in Uzigua"[4] because at that time the ivory trade had no restrictive policy to follow, and the demand for ivory increased overseas because it was used for many purposes.

Figure 1: Display of jewelry and carvings made of ivory in Manhattan, New York, 2012.

According to "Ivory in World History – Early Modern Trade in Context" by Martha Chaiklin, ivory was used in ancient Greece, Rome, and Egypt for jewelry, figures, and boxes. In the Middle Ages ivory was mainly used "for religious objects so it is fitting perhaps that in the age of exploration, they often used it for navigational and other scientific instruments."[5] Ivory was the material of choice for scientific and medical instruments because, compared to wood, it is more resistant to shrinkage and swelling. It was also easier to read, since it has a gleaming white surface. African ivory was also imported into Asia. Asia provided its own ivory until the early modern period, but in "the seventh century East African ivory was also exported to India and China because the demand was so high in these countries" [6]. African tusks were in demand because African ivory is larger than Asian ivory.

The European market preferred East African ivory because it was cheaper than ivory from Southeast Asia. R.W. Beachey, writing in the Journal of African History, states that East African ivory was also soft, which made it great for carving. The ivory trade over-topped all other trades at the time, even the slave trade, which made East Africa the leading "source of ivory in the world" [7]. The endangerment of the elephant species is the outcome of an ivory trade that existed for centuries.

Figure 2: Ivory trade in East Africa, 1880

Beachey argued that the East African ivory trade is very ancient and that early geographers and travelers gave it more importance than the slave trade. Arab merchants exported ivory from the East African coast throughout the early and later Middle Ages, but "the great development of the East African trade took place"[8] in the nineteenth century, when demand for ivory from Europe and America increased. Ivory was the number one export until the end of the century. Westerners were not only interested in the value of ivory; they were also interested in a type of sport called trophy hunting.



Figure 3: Trophy hunting of elephants in Africa, 2011

An article titled "Trophy Hunting in Sub-Saharan Africa: Economic Scale and Conservation Significance" by Peter A. Lindsey claims that trophy hunting by European settlers and explorers was uncontrolled and had a harmful impact on wildlife species like elephants. In the late 19th century, people noticed that elephants were disappearing, so they decided to preserve the species through controlled hunting, with revenues from trophy hunting going to wildlife conservation. "During the early 20th century, the tourist trophy hunting industry started in Kenya, wealthy European and American visitors paying settler farmers to guide them on hunting safaris in the area."[9] Tourist hunting industries later developed in other parts of Africa as well. Lindsey also argues that "trophy hunting has created financial incentives for the development and/or retention of wildlife".[10] The idea that killing an animal for sport could save the species might not make sense, but the argument is that the money paid by the trophy hunter goes to wildlife conservation.

A New York Times article discusses what the World Wildlife Fund (WWF) is doing to save the elephant species in East Africa, specifically in Tanzania. It states that the WWF has provided vehicles for anti-poaching units to arrest poachers, who are responsible for the declining elephant population. An unspoken assumption that can be drawn from the article is that the rangers who guard the elephants have to fight the poachers, which can even result in death; hundreds of wildlife rangers have been shot while trying to protect the animals. The article also states that "In 1975 alone, 423 poachers were arrested and 3000 snares were confiscated." [11] Arresting poachers would be more effective if the ivory trade were banned in every country.

A journal article called "The Perilous Future of the Elephant," published in Science News in 1977, claims that ivory sales should be banned in the United States because the elephant population is decreasing. It says elephants are still being targeted because the price of ivory keeps rising. The article also discusses how pulling elephants out of their natural habitat and forcing them to live in zoos or national parks is harmful: these parks are overcrowded, so the elephants eat all the grass and leaves, which leads to starvation. Starvation and poaching together then cause the population to decline. [12] The authors suggest that governments take action before elephants are completely wiped out from some areas.

The Convention on International Trade in Endangered Species (CITES) banned international commercial trade in African ivory in 1989. After the ban, demand for ivory in the US decreased, and some ivory carving shops closed down in China and Hong Kong. Yet CITES allowed a "one-off sale" of tons of stockpiled ivory from Botswana, Namibia, and Zimbabwe to Japan in 1999. [13] After that, the ivory market rose again. To prevent this from happening again, the ivory trade must be banned internationally without exception. To show their commitment, some countries, like the United States and Hong Kong, destroyed their stockpiles of confiscated ivory. The purpose of this act was to send the message that ivory has no place or value in the US and to inspire other countries to do the same. The greatest threat to elephants is humans, but we also have the ability to save elephants if we unite to ban poaching. According to CITES, countries all over the world are being urged to destroy their stockpiles of ivory and ban its trade. If these steps are taken, the African elephant population will eventually recover.


[1] Catrina Stewart, “POACHING THREAT IS DEADLIER THAN EVER,” The Independent London, January 03, 2014, (accessed January 20, 2017).

[2] The Independent London, January 03, 2014.

[3] Steinhart, Edward I. Black Poachers, White Hunters: A Social History of Hunting in Colonial Kenya. Oxford: James Currey, 2006: 9

[4] Giblin, James L. The Politics of Environmental Control in Northeastern Tanzania: 1840-1940. Philadelphia: U of Pennsylvania, 1992: 28

[5] Martha Chaiklin, "Ivory in World History – Early Modern Trade in Context," History Compass 8/6 (2010): 535.

[6] History Compass 8/6 (2010): 535.

[7] R.W. Beachey, "The East African Ivory Trade in the Nineteenth Century," Journal of African History 8, 2 (1967): 269.

[8] Journal of African History, 8, 2 (1967)


[9] Peter A. Lindsey “Trophy Hunting in Sub Saharan Africa: Economic Scale and Conservation Significance” (2008): 41 

[10] “Trophy Hunting in Sub Saharan Africa: Economic Scale and Conservation Significance” (2008): 41

[11] New York Times, New York, 1979

[12]”The Perilous Future of the Elephant.” Science News 111, no. 21 (1977): 327.

[13] "CITES National Ivory Action Plans," CITES.


Figure 1. Illegal-Ivory Bust Shows Growing U.S Appetite for Elephant Tusks. 2012.

Figure 2. Ivory trade in East Africa, 1880.

Figure 3. Killing for sport, 2011.


The Conflicts of the French Slave Trade and Human Trafficking Today

Download PDF

As of 2016, the newest information available states that "Each year about 2.5 million victims, mostly women and children are recruited and exploited worldwide." [1] France has been doing its best to fight this, but despite that, human trafficking is the ". . . third most common form of trafficking in the world next to drugs and arms. . ." [2] In addition, human trafficking generates more than Rs. 2,40,000 crore every year. One crore is 10,000,000 rupees, which the source values at $160,000 in US currency; this means that in total, human trafficking makes more than $38,400,000,000 every year. This might seem like an excessively large amount of money, but with around 2.5 million victims annually, the price adds up. France is especially worried about this crime because it is one of the top organized crimes committed there. France is also a frequently used transit country: sitting in the middle of Europe, it lets traffickers take victims from countries in the south and move them to buyers in the north. In hopes of curbing the rising rate, France has implemented the "Palermo Convention which aims to prevent, suppress and punish trafficking in persons, especially women and children." [2]
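The currency arithmetic above can be double-checked with a short script. Note that the 62.5-rupees-per-dollar rate is only the conversion implied by the essay's own figures (1 crore = 10,000,000 rupees = $160,000), not an official exchange rate:

```python
# Sketch verifying the conversion of Rs. 2,40,000 crore to US dollars,
# using the essay's implied conversion of 1 crore = $160,000.

RUPEES_PER_CRORE = 10_000_000   # 1 crore = 10 million rupees
USD_PER_CRORE = 160_000         # essay's stated dollar value of one crore
total_crore = 240_000           # Rs. 2,40,000 crore (Indian digit grouping)

total_rupees = total_crore * RUPEES_PER_CRORE   # 2.4 trillion rupees
total_usd = total_crore * USD_PER_CRORE

print(total_usd)  # 38400000000, i.e. $38.4 billion, matching the essay
```

The implied exchange rate works out to 10,000,000 / 160,000 = 62.5 rupees per dollar, roughly in line with mid-2010s rates.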

Human trafficking is a global problem: the atrocity is committed worldwide, and it is not something new. Before human trafficking (similar in context to human smuggling, but different in how those being brought across borders are treated) was even called human trafficking, it was called slave trading. The only differences between then and now are that slaves were generally Africans, while now anyone can be trafficked, especially women and children, who are usually sold as sexual objects; and that trafficking is an illegal activity, whereas before abolition, slave trading was considered common and legal. While the act itself is horrible, it has economic ramifications as well: bringing such a large quantity of people over borders to sell is just like illegal immigration. Whether the victims want to cross borders or not does not change that label; the fact that they are now unregistered people in a foreign country makes them immigrants. In the end, these thousands of unwilling illegal immigrants affect the country they are brought into, and in this case, it is France.

Modern-day human trafficking has its roots in slave trading. France supported the slave trade with much gusto, sending off around 4,200 voyages (via ship) transporting a total of 1,250,000 slaves. [3] Considering the transportation and resources available in the period, this was a massive amount, even though it seems small compared to today's human trafficking total of 2.5 million people per year. Figure 1 depicts the cramped and inhumane conditions on a slave ship. While nations like the US and Britain had gone through abolition, other nations—like

Figure 1 Interior of Slave Ship, Vigilante, 2017. [5]
France—continued to practice slavery. [4] The French enslaved almost four times as many Africans as other slavers, probably helped by the fact that the French started slaving before others and did not stop until around 1830, after others had already stopped. France's leading slave port was Nantes, which by itself carried 55,000 slaves in 180 ships. [6] The French did not limit themselves to taking slaves from Africa; they also took slaves from places like Sumatra, Nias, and other French colonies. Since the slave trade was booming, the number of slaves in France (primarily Africans) grew and grew. However, there were restrictions on how, and how long, they lived in France, dictated by the Code Noir. Slavery in France continued until 1848. Evidently, France was among the staunchest supporters of slavery, and that is reflected in how it is one of the countries most involved with human trafficking today.

Today, trafficking, like smuggling, is a form of immigration, and when countries continue to take in immigrants it can be difficult to compensate for the rapidly increasing population, even when the immigrants are documented. To clarify: many human trafficking victims were originally people who hired others to smuggle them out of their country, normally during a crisis, but some smugglers did not bring them to the intended destination, instead taking them somewhere else to sell them. This turned would-be illegal immigrants into victims. "Traffickers frequently take away the victims' travel and identity documents, telling them that if they attempt to escape, the victims or their families back home will be harmed, or the victims' families will assume the debt." [7] Figure 2 depicts a fairly current census of how many victims were found in Europe. These victims were specifically Nigerian, and while France is not at the top of the list, many

Figure 2 Nigerian Victims of Human Trafficking by EU Country, 2017. [8]
victims were still trafficked into France. [8] Unfortunately, there are more than just Nigerian victims, and this report covers only those who were rescued. Illegal immigration into France began during the first oil shock in 1974, and since then, ". . . illegal immigration has become a constant feature of French political and social life. . ." [9] Between 200,000 and 400,000 illegal immigrants are in French territory, and 80,000 to 100,000 more arrive every year; [10] that is many more mouths to feed and people to house. Sadly, among those immigrants, over 20,000 were trafficked rather than smuggled.

Unlike human trafficking, the slave trade was not a hush-hush operation. In fact, most slaves were either kidnapped ("There is a great reason to believe, that most of the negroes shipped off from the coast of Africa, are kidnapped" [11]), as happens today, or were sold as prisoners of war. Alexander Falconbridge, a member of the growing abolitionist movement (those against slavery), studied the manner in which slaves were acquired and documented it in his book, An Account of the Slave Trade on the Coast of Africa. Slave traders had several means of acquiring slaves: they snatched them off the streets, used animals to hunt them down, or, as stated, received them from other clans as prisoners of war, which was a very effective source of income for many clans. For those who were not sold as prisoners of war, there was no way of knowing who would be victimized and captured: ". . . a negroe informed me, that being one evening invited to drink with some of the black traders, upon his going away, they attempted to seize him. . . he was prevented from effecting his escape by a large dog. . ." [12]. Many slave traders used trickery to capture Africans: "The unsuspicious countryman readily consented, and accompanied the trader in a canoe to the side of the ship. . . black traders on board, who appeared in secret, leaped into the canoe, seized the unfortunate man, and dragging him into the ship, immediately sold him" [12].

Because trickery was used to take slaves, and Africans sold their own people as slaves, there was no way of knowing whether anyone was safe from traders. "[Slaves sold at a fair] consist of those of all ages, from a month, to sixty years and upwards. . . Women sometimes form a part of them, who happen to be so far advanced in their pregnancy, as to be delivered during their journey. . ." [12]. The book Children in Slavery through the Ages states that boys (négrillons) and girls (négrittes) were usually defined as fourteen years of age and younger. [13] Figure 3, titled "Woman and Child on Auction Block," is a sketch by an unknown artist from the 1800s; it clearly depicts how white men did not care whom they took

Figure 3 ” Woman and Child on Auction Block,” 1800. [14]
and sold, as long as they profited. [14] Considering that this form of human trafficking was accepted, slave trading could be considered worse than modern-day human trafficking: it was not a secret operation, people were taken off the streets, and they were sold immediately at fairs. Given that the French took almost four times as many slaves as the Americans and were heavily involved in the vending of slaves, there is little doubt that French slavers did this to many of the slaves they sold. Unfortunately, slaves were a popular commodity in France, though France cannot be blamed as the only culprit for this horrid affair. As an abolitionist, Falconbridge did not approve of slavery, let alone how slaves were acquired. Assuming Falconbridge was a wealthy man (he had the money to travel and to publish a book), his information would have had more impact and influence on fellow abolitionists. Falconbridge intended his book to reach those who thought they had acquired their slaves by legal means; if people understood how slaves were actually obtained, they might stop purchasing them. Additionally, for those already against slavery, the book could serve as another means of rallying the troops to fight for abolition.

As stated previously, over 20,000 illegal immigrants were originally meant to be smuggled over borders but were taken advantage of by their smugglers. Illegal immigration, aided by human traffickers and smugglers, negatively affects the country where the immigrants end up. When immigration is legal, the economy is better at managing the negative consequences; when it is illegal, the problems are harder to overcome. Either way, immigration in large numbers generally does not have a positive effect on the receiving country. "Immigration and its consequences are among the most important social and political issues. . ." [15] Illegal immigration, in this case by trafficking, negatively impacts the country through jobs being taken, taxpayer resources being used, and terrorism increasing. It also causes problems that have often led to hunger strikes and protests. For example, most illegal immigrants get jobs with employers who do not care whether they have a visa. These employers normally treat their employees horribly: bad hours, bad pay, bad working conditions. The only way these immigrants can improve their working conditions is by striking. These negative impacts worsen with the number of immigrants moving in, and currently ". . . France [has] the world's highest rate of immigration, 515 per 100,000 inhabitants." [16] Although much has been put in place to stop illegal immigration and to manage its consequences, France still suffers.

Illegal immigration is—obviously—illegal, slave trading was not. However, when a Frenchman wanted to purchase a slave and bring it back to France with him, he had to register the slave as his with the government. If the Frenchman failed to do this, then if it was found out or the slave loudly announced that he was free/he wanted his freedom, then legally, according the law in France, the slave wasn’t purchased and isn’t that Frenchman’s slave. If the slave wanted to, it could find a way back home. These laws can be found in the Code Noir, which is the Edict Concerning Negro Slaves and it was issued by Louis XV. During the Triangle Trade, the exact number of Africans in France (free or enslaved) was between 4,000 and 5,000 entering and leaving the country. [17] The Code Noir was “. . . originally introduced to regulate the life of slaves and freedom in France.” [18] It stated that slavery was necessary and authorized and said that slaves were property. (Like how human trafficking victims are treated.) The Code Noir was unethical—in today’s views—but it was also there to protect slaves as well, from mistreatment from whites. In regard to having the register the slave, it was not necessarily for the slave’s protection, but so the slave owner to bring the slave to France without losing his property. “In order, to maintain these property rights the slave owners were required to follow the procedure outlined by the Edict of 1716.” [19] The Edict of 1716 stated that slave owners were required to obtain permission of the governor of the colony to bring the slaves to France, and then when the slaves were brought, to register them in Paris at the Admiralty. If the slave owner didn’t do this he was subject to large fines and loss of his slave, bringing slaves without registering them was illegal immigration. 
Bringing slaves to France was a form of immigration, and with this immigration the population increased. Although the increase did not cause the economic problems it does today, it was a stepping stone toward modern-day human trafficking.

A prime example of a slave gaining freedom in France because his owner did not register him is that of Francisque. Francisque was a non-African slave brought to France by his owner, Sir Brignon. Francisque worked many years and managed to make enough money to buy his freedom; however, Sir Brignon denied his request to leave, and the case was brought to the Parliament of Paris. During the proceedings, it was discovered that Francisque had never actually been registered, and the verdict was that Francisque was free and that Sir Brignon had to pay him “. . . 800 livres for eight years’ back wages, plus 200 livres in interest and damages for his imprisonment during the trial.” [20] Francisque is an example of historical illegal immigration.

Slave trading in France was common, and failing to properly document a slave resulted in heavy fines paid to the former slave and to the government; however, the resulting increase in population was not so large that it caused an economic crisis. Even so, illegally holding a slave was like human trafficking today. The difference in the negative impacts is that, for an illegally held slave, only the owner paid the consequences, while for trafficking the consequence is much grander: the impact is more severe and affects the whole country, not just the owner. Slaves then and victims now were treated similarly as well: both were used as sex slaves or workers, and both were treated cruelly and unfairly against their will. Despite all this, the modern-day slave trade is a globalization problem that is extremely prevalent in France, where it is the number one organized crime. Not only does it hurt the victims, it hurts the economy, both by increasing and promoting organized crime and by increasing the population of illegal immigrants. Human trafficking is calamitous on all levels.



[1] P. Joseph Victor, “Human Trafficking a Major Concern: France,” The Hindu, April 04, 2016 (accessed January 19, 2017).

[2] The Hindu, April 04, 2016.

[3] Philip D. Curtin, The Atlantic Slave Trade: A Census (Madison: University of Wisconsin Press, 1972), 166-179.

[4] “Interior of the Slave Ship Vigilante.” Sea of Liberty. April 05, 2017. Accessed April 29, 2017.

[5] Schomburg Center for Research in Black Culture, Photographs and Prints Division, The New York Public Library. “Interior of Slave ship, Vigilante.” New York Public Library Digital Collections. Accessed April 29, 2017.

[6] Curtin, Atlantic Slave Trade: A Census,163 & 168.

[7] “Human Trafficking and Smuggling,” last modified January 16, 2013, accessed March 21, 2017.

[8] Dqlepiz. “Nigerian victims of human trafficking by EU country (2010-2012).” Atlas. April 05, 2017. Accessed April 29, 2017.

[9] “How Many Clandestine Immigrants in France?,” last modified April 13, 2006, accessed March 20, 2017.

[10] “How Many Clandestine Immigrants in France?”

[11] Alexander Falconbridge, “The Manner in Which the Slaves are Procured 1788,” An Account of the Slave Trade on the Coast of Africa (HathiTrust: University of Michigan, 1792), 13.

[12] Falconbridge, “The Manner in Which the Slaves are Procured.” 13.

[13] Gwyn Campbell, Suzanne Miers, and Joseph C. Miller, eds., Children in Slavery through the Ages (Athens: Ohio University Press, 2009), 37. Accessed April 28, 2017, ProQuest Ebook Central.

[14] Schomburg Center for Research in Black Culture, Photographs and Prints Division, The New York Public Library. “Woman and child on auction block.” New York Public Library Digital Collections. Accessed April 29, 2017.

[15] “Immigration in France and the United States: A Comparative Study of Its Significance, Causes, and Consequences.” Bulletin of the American Academy of Arts and Sciences 42, no. 4 (1989): 5. doi:10.2307/3823138.

[16] “Immigration in France and the United States: A Comparative Study of Its Significance, Causes, and Consequences.” 7.

[17] Samuel L. Chatman, “‘There Are No Slaves in France’: A Re-Examination of Slave Laws in Eighteenth Century France,” The Journal of Negro History 85, no. 3 (2000): 144. doi:10.2307/2649071.

[18] Chatman, “‘There Are No Slaves in France,’” 145.

[19] Chatman, “‘There Are No Slaves in France,’” 146.

[20] “France’s Freedom Principle and Race, 1759,” in Sue Peabody and Keila Grinberg, Slavery, Freedom and the Law in the Atlantic World (Boston: Bedford/St. Martin’s, 2007), 45.








Geographic Focus: France (including Nantes, Bordeaux, La Rochelle, Le Havre, Saint Malo, Lorient, Honfleur, Marseilles), Africa, America.

Search Terms: (slav* AND France) also, prostitut*, smuggl*, trad*, consensus, immigra*, “Code Noir”.

Primary Source Database: Digital Public Library of America

Primary Source Search Date Limiter: before 1979 (date range 1800s-1979); also 1987.

Historical Research Questions: How does the slave trade lead into human trafficking (did the slave trade never truly stop, but simply go quiet)? Would the economic impact of having slaves in France during the slave trade have increased if the Code Noir had not been issued?


Advancement on Women’s Rights in Yemen


The concept of human rights has been a privilege given only to those seen as the “superior” human race. Human rights have been denied to those labeled as the “other” groups, and this denial has been justified through religion and culture. Among those targeted as the “other,” along with a few other groups, are women. Women tend to be regarded as less than men and are not given the same rights. More specifically, women in the Yemen region do not have actual rights because they are not accorded the same value as men. The lack of women’s rights is justified through the culture and religious beliefs practiced in the Yemen region. For the longest time women have been inferior to men in Yemen and other Muslim regions; however, over the years, advocates for women’s rights have been creating movements for change in these regions. The women living in the Yemen region are being oppressed under Islamic law. Women in Yemen and other Islamic regions continue to face a lack of human rights and ongoing oppression, from previous years to the present day. North Yemen became independent from the Ottoman Empire in order to advance and have better opportunities; however, this was not the case for the women of the region, who are still fighting and advocating for their human rights. [1]
There is a clear difference between the statuses of men and women in the Muslim region when compared with other civilizations, specifically Western ones. A major difference between these two civilizations lies in Muslim regions’ adherence to the Qur’an and the shari’a, which to them is the holy law. This tends to create friction because of how opposed Muslim regions are to borrowing another region’s customs and practices. Women in Muslim and Western civilizations are given different statuses based on each region’s laws.

Islamic Law in Pakistan – Global Legal Collection Highlights
Islamic law has placed many restrictions on Muslim women due to the myths that have been generated concerning women. A common myth surrounding women in the Muslim region is that women are perceived as evil because they are labeled as sexual temptations. Another important factor in why women are inferior to men in the Yemen and Muslim regions is that women are considered a man’s property. Men can be polygamous with the justification that “male polygamy does not bar us from knowing who the father is, but female polygamy would” [2]. Women are therefore seen as having to be controlled and, most importantly, as needing to have little to no contact with males outside the family.
Women in the Yemen and Muslim regions are struggling to have their voices heard. Women activists have been changing strategies and tactics in hopes of transitioning into the modern world. However, the women are being ignored in their fight for gender equality. What many countries fail to see is that, because of the ties between state and religion, women are becoming victims of the laws put in place. The hope is that, through continuous activism and support for women’s rights in Muslim regions, the isolation, gender discrimination, and other forms of inequality will no longer be emphasized. Changes are being made in hopes of giving Yemen the push it needs to become more modern, especially regarding women. For example, a popular program was the Girls World Communication Center (GWCC), established to encourage and empower young women to pursue higher education and careers. After nearly two decades, the Republic of Yemen granted women the right to vote and to receive higher education. Women in the Yemen region are heavily politicized because “the rights of women and the status of women are vociferously debated by men” [3]. However, women activists are labeled as controversial, because the idea of women having equal rights with men is viewed as preposterous. The Yemeni Scholars’ Body claims that women are not a man’s equal and therefore should not be guaranteed protection [4]. It also claims that Western calls for reform simply corrupt women, despite article 40 of the constitution. Women’s rights activist Fatima Salah explains that Yemeni law does not entitle women to the same rights as men because, under it, they are not considered to have a soul. The Women’s National Committee is fighting to change the Yemeni law.

Women of Protest: Photographs from the Records of the National Woman's Party

For decades there has been an increase in what is called veiling and seclusion, which is used to ensure that women are living “proper” lives. This practice was mostly seen in the northern Yemen region, where conservatives used religion to justify the little to no human rights given to women. For women living in the southern Yemen region, however, there has been a significant change in the types of jobs women are allowed to have. If given the choice, many women would choose having a job over living a “proper” life. [5]

National American Woman Suffrage Association Collection
North Yemen gained independence from the Ottoman Empire as a way to create its own society with new laws to benefit the people living in Yemen; however, the new society and laws continued to oppress the rights of women. Women are still fighting for their voices to be heard and to be seen and accepted as men’s equals. Although women advocates have helped Middle Eastern regions become more advanced, there is still much to be done for women to gain more rights. Yemen, like other Middle Eastern regions, is not yet fully modernized, and only with time will it continue to make advancements in hopes of women being given more rights.

[1] Michael Provence, “Ottoman Modernity, Colonialism, and Insurgency in the Interwar Arab East,” International Journal of Middle East Studies 43, no. 2 (May 2011): 205-225.
[2] Nikki R. Keddie, “The Past and Present of Women in the Muslim World,” Journal of World History (1990): 77-108.
[3] Stacey Philbrick Yadav, “Does a Vote Equal a Voice? Women in Yemen,” Middle East Report (2009): 38-45.
[4] Abdulrazaq Al-Azazi, “Women’s rights advocates: secure women’s rights through the constitution,” Yemen Times, October 15, 2013 (accessed January 19, 2016).



The South African Apartheid



In 1948, a policy determined the future of South Africa. The two groups that inhabited South Africa at the time were feuding to determine where the power would lie. The Afrikaners came from the south, while the Bantu came from the north. Although both groups arrived at the same time, neither was willing to negotiate over who would gain control. The idea of discrimination based on race was introduced when the Afrikaners became convinced that discrimination was necessary for building a prosperous community. The Bantu population disagreed with this claim, noting that discriminatory measures were not necessary.

The policy was called Apartheid, representing the separateness and separate development of the races. But the question still stands: what contributing factors led to the apartheid? To reveal the answer, nearly half a millennium of history explains the rise and decline of the apartheid. Through analyzing the construction of the apartheid, the roots of racial discrimination begin to unravel. The idea of systematic segregation arose from ancestral racial differences and sociological conditions. There are two different elements to the apartheid. The first, known as petty apartheid, refers to the racially motivated laws that governed everyday life. The second, called grand apartheid, describes the regions where the different races were allowed to reside. [1] Both types of apartheid hindered and degraded the human rights of the subjugated group.

It was in 1488 that the first Portuguese expedition explored the Cape in South Africa. When the explorers approached the land, the local natives began to defend themselves from the invaders. This is known as the first recognition of another race in South Africa. [2] Two centuries later, Dutch colonists controlled the land. Not only did the colonists populate the Cape, they also introduced slavery to South Africa. Their attempts to maintain the peace and refrain from enslaving the natives of the Cape led the Dutch to import slaves from West Africa and Angola in 1658. [3] After the Dutch colonists hired the natives to labor alongside their slaves, they began to treat the two groups the same. This maltreatment stemmed from the physical and biological similarities between the natives and the slaves: both possessed darker skin pigmentation and lived under harsh conditions. Prior to Dutch colonization, the natives were living as hunters and gatherers in bands and tribes with their own livestock. Their minimalist society was converted into a labor-demanding order dominated by white power. The colonists deemed the natives easily controllable and uneducated in contrast to the Europeans. They believed the Cape and its people needed their help in order to survive and thrive. During the Dutch colonization, the white population increased, outnumbering the natives six to one in 1688. However, for the 650 slave owners, there were 25,000 slaves in South Africa by 1798. [4]

Approximately seventy years before the Apartheid was announced, the white-rule government was preserving the gap between the races. It passed the South Africa Act in 1910, stating that non-whites had little to no say in elections, depending on the region. While 8 million native South Africans were allowed to vote for three European officials, 2 million Europeans were able to elect 150 officials. [5] Because of the increasing white population, coupled with the laws of supply and demand, poor whites began to appear. Faced with this disadvantage to the white community, the government made it a priority to give opportunities to poor whites only, despite the fact that the black population was struggling as well. Without regard for the many more poor Africans, the white rulers rehabilitated and reintegrated poor whites back into the prestigious white community. “Poor whites and poor blacks – are generally treated separately from each other.” [6] In 1913, the Natives Land Act divided the country along racial lines; whites took 93% for themselves and left 7% for all the other races residing in South Africa. The colonial towns had created racial integration, which was seen as racial pollution. As white people continued to seclude themselves, a racial phobia emerged. [7] The colonists were terrified of the natives and afraid of losing power to them. Ten years later, in 1923, the Urban Areas Act declared that whites and non-whites could not live in the same areas of the country. By doing so, the government maintained political and economic power. They knew “their wealth was built on the poverty of the other races.” [8] Throughout the history of the apartheid, three different stages are recognized. The first stage, from 1948 to 1959, was characterized by embedding European power while instilling discrimination within the South African community.
The second stage, between 1959 and 1966, became known for dividing the developing world between the races, which further contributed to the apartheid. The final phase, from 1966 to 1994, was when the idea of apartheid became normalized and grew in power. [9] As time went on, however, the white-rule government began to decline.


Figure 1. The Division Council of The Cape designating a White Area only.


The Group Areas Act of 1950 deemed all unnecessary contact between different races unproductive. This act was meant to ensure that race-to-race conflict would be minimal. The idea was demonstrated in the Reservation of Separate Amenities Act, passed in 1953, which visibly separated whites from non-whites in public spaces. This example of petty apartheid supports the main argument that racial differences were one of the causes of the apartheid. In 1962, Nelson Mandela, a prominent leader in the South African community, stood trial for inciting individuals to protest illegally as well as leaving the country without a valid passport. Mandela’s defense confronted white authority about the inequality within its society: “All the rights and privileges to which I have referred are monopolized by whites, and we enjoy none of them.” [10] In his defense statement, Mandela argued that the apartheid was built on false morals and disregarded human rights. After Nelson Mandela reasserted the values of human rights, South African residents began to protest for their natural rights as humans. This was the beginning of the South African uprising and the decline of the white empire. Beginning in 1960, United Nations resolutions were issued to protect the South African community from the apartheid. Resolution 1514, the Declaration on the Granting of Independence to Colonial Countries and Peoples, states, “…domination and exploitation constitutes a denial of fundamental human rights.” [11] To halt the progress of the apartheid, South Africa’s people first had to be acknowledged as having more potential than simply being colonized. Resolution 32/105 of 1977 emphasizes the right of the people of South Africa as a whole, “irrespective of race, colour or creed, to determine, on the basis of majority rule, the future of South Africa.” [12]


Figure 2. A protest poster portraying the potential to overcome an Apartheid Parliament.


Within the forty-six years of white control, South Africans began to gain control for themselves. In 1990, Nelson Mandela was released from prison; he would later be elected president. He was released with the support of the South African community in recognition of the progress he had made. In 1991, the Abolition of Racially Based Land Measures Act demanded the removal of any racial motivation from other laws. During that year, all remaining apartheid laws were repealed, most notably the Group Areas Act and the Population Registration Act. Repealing these laws was the last step in erasing white power’s apartheid progress. [13] The future of South Africa was finally in the people’s hands.

After the final remnants of the apartheid diminished, Nelson Mandela dedicated his life to protecting and enforcing human rights in South Africa and around the world. Because South Africa transitioned from the apartheid to peace so seamlessly, it was seen around the world as an empowering nation and established a powerful role for itself in global relations. In May of 1994, South Africa joined the Organization of African Unity and the Non-Aligned Movement, and rejoined the Commonwealth of Nations after an absence of thirty-three years. Later, South Africa was even able to rejoin the U.N. General Assembly and the United Nations Educational, Scientific and Cultural Organization after forty years. [14]


Figure 3. A news article about Nelson Mandela’s death and his contributions to South Africa. Los Angeles Times, 2013.


Although the apartheid lasted only forty-six years, its roots can be traced back half a millennium. The South African Apartheid stemmed from racial differences and sociological conditions between the European colonists and the native people. The natives were hunters and gatherers living in bands and tribes before colonists recreated their land and used them for labor. Because the natives seemed easily controlled and required little to survive, the colonists took advantage of their land and culture to facilitate their own gains. The colonists not only believed they were helping the natives; they thought the natives and the land needed them in order to succeed and flourish. The colonists introduced slavery to South Africa because it was important to them to preserve the peace without enslaving the natives. The white population continued to increase, as did the slave population. As the colonial towns grew, tensions between the Europeans and native South Africans began, and laws were enacted to segregate the two. The laws prohibited the two races from utilizing the same public spaces and residing in the same locations. Once the laws were enacted, racial discrimination and mistreatment were protected by law. Thus, the apartheid began. With lawful apartheid came protests and resistance to the discriminatory, race-based laws. The rich history of South Africa represents the struggle between racial discrimination and equality for humankind.




[1] Roger Beck, The History of South Africa (Westport, Connecticut: Greenwood Press, 2000), p. 129.


[2] Beck, p. 25.


[3] Beck, p. 28.


[4] Beck, p. 28.


[5] “Racism in South Africa,” New York Times, 1950, p. 29 (ProQuest Historical Newspapers: The New York Times).


[6] Robert Ross, Anne Kelk Mager, and Bill Nasson, The Cambridge History of South Africa (Cambridge and New York: Cambridge University Press, 2011), p. 254.


[7] Ross, Mager, & Nasson, p. 260.


[8] David Downing, Witness to Apartheid in South Africa (Illinois: Heinemann Library, 2004), pp. 4-15.


[9] David Welsh, The Rise and Fall of Apartheid (Jeppestown, South Africa: Jonathan Ball; Charlottesville, Virginia: University of Virginia Press, 2009), p. 101.


[10] Nelson Mandela, Defense Statement: Nelson Mandela Papers, (1962-1964) retrieved from


[11] David Welsh and J. E. Spence, Ending Apartheid (Edinburgh: Pearson Education, 2011).


[12] Enuga Reddy, “Apartheid, South Africa and International Law” (United Nations Centre against Apartheid: Notes and Documents, 1985), retrieved from,%20South%20Africa%20and%20International%20Law.pdf.


[13] David Welsh and J. E. Spence, Ending Apartheid (Edinburgh: Pearson Education, 2011).


[14] Beck, p. 45.

[15]Paul Maylam, “The Rise and Decline of Urban Apartheid in South Africa,” African Affairs 89, 354 (1990): 57-84.



[16] Lyndsey Chutel, “African Migrants in South Africa are in Fear of Their Lives- Again,” Quarts Africa (2017), retrieved on February 22, 2017 from


[17] Stephanie Ott, “Heroes of the Anti-Apartheid Movement,” CNN (2013), retrieved February 20, 2017 from


[18] Greg Myre, “20 Years After Apartheid, South Africa Asks, How are We Doing?,” Northwest Public Radio (2014), retrieved February 20, 2017 from



