Who Really Rules America Dominates Politics Political World System Complete Series

7 months ago
5.42K

Who Is Really Rules All America Money And Dominates Politics Political World Wide System Complete Series Who Ruling America Now ? We Our Takes a close comprehensive look into the governing system of the United States of America and reveals the behind-the-scene powers that rule the nation. There are two Americas; one in which people have freedom in choosing their leaders within the framework of the constitution living in the land of the free, and another, wherein all is dedicated to the ruling 1% and within which a hidden network of power governs including the media, Wall Street , the Military and Corporations.

President Dwight D. Eisenhower famously warned the public of the nation’s increasingly powerful military-industrial complex and the threat it posed to American democracy. Today, the United States routinely outspends every other country for military and defense expenditures.

We live in the United States of Surveillance – with cameras positioned on every street corner and much more invisible spying online and on the phone. Anyone paying attention knows that privacy is dead. All of this is not happening by accident – well-funded powerful agencies and companies are engaged in the business of keeping tabs on what we do, what we say, and what we think.

To many in the world today, the face of America also has a big nose for sniffing and sifting mountains of data – phone calls, emails, and texts. And with many mouths silenced by paranoia to keep what they decide is secret, secret. America has become a Surveillance-Industrial State where everyone’s business has become its business, and where one huge US intelligence Agency has been given the sanction and unlimited amounts of money to spy on the whole world.

The Military Industrial Complex was a phrase used by outgoing President Dwight Eisenhower when warning of a close relationship between the government and its defense industry.

Military-Industrial Complex is an unofficial phrase used to signify the rather 'comfortable' relationship that can develop between government entities (namely defense) and defense-minded manufacturers/organizations. This union can produce obvious benefits for both sides - warplanners receiving the tools necessary for waging war (while also furthering political interests abroad) while defense companies become the recipients of lucrative multi-million or multi-billion dollar deals.

'War for profit' is not an exclusive approach for modern times as it drove the best and worst of old Europe for many decades - perhaps best exemplified by the naval arms race seen between France, Spain and Britain. The driving force behind these initiatives was generally in out-doing a potential foe and, therefore, forcing the establishment of a large standing military force to counter the moves of the potential enemy. The modern interpretation of this, as it relates to the Military-Industrial Complex, is only slightly altered in that the established military force is now utilized to further global interests - the enemy is no longer another nation per se but any organization not in line with presented ideals.

At any rate, the theory of a mutually beneficial relationship existing between warplanners and industry is not unfounded for there is much money to be made in the design and development process of military goods which precede lucrative production commitments. As such, a defense contractor can be the recipient of multiple contracts during the lifespan of a single product leading many of the top firms to find ways to consistently outdo competitors in an attempt to maintain their own respective bottom lines in the boardroom and appease shareholders.

The phrase Military-Industrial Complex was first used in an American report at the turn of the 20th Century and later immortalized by outgoing United States President Dwight D. Eisenhower during his January 17, 1961 farewell address to the nation. In his speech, Eisenhower cited the 'Military-Industrial Complex' as a grave warning to the American people based on his experiences of an unlimited wartime economy coupled with a political environment as witnessed during, and after, World War 2 (1939-1945) - the warning being to not let the military-industrial establishment dictate America's actions at home or abroad for such unchecked power would begin to usurp the inherent freedoms found in the very fabric of our nation. The original usage appeared as Military-Industrial Congressional Complex but this was later - rather ironically - revised to exclude its reference to the American Congress.

Since October of 2006, MilitaryIndustrialComplex.com has existed to keep a running tally of those American defense contracts (at least those publicly revealed by the United States Department of Defense) in an attempt to keep an accurate value of defense expenditures for the interested reader. Despite the rather apparent transparency, the listed contracts do not necessarily represent the entire breadth of U.S. defense spending for not all might be publicly announced/presented. Our database can, however, be used to present a basic outline and, perhaps, be utilized in predicting the next great American conflict or educating the average American in the direction of his/her government.

What is the Bilderberg Group and are its members really plotting the New World Order Annual meeting of American and European elite attracts huge amount of suspicion and paranoia but is it really just an 'occasional supper club'?

As more Americans plow retirement savings into passive funds, the buy side has overtaken the sell side. Buoyed by an index fund collection called iShares that it purchased from Barclays, BlackRock is the world's largest asset manager, with $6.3 trillion of other people’s money under its control. BlackRock’s Aladdin risk-management system, a software tool that can track and analyze trading.

Explore the powers that run the United States, the ruling 1% network of America. This documentary miniseries ties connections between corporate entities, the media, and the government, and how they work to govern society today.

00:00 The Debate Over Power
Explore the complex nature of power in American society today.

00:24:08 The History of Democracy in America
This episode turns back the clock to examine the evolution of democracy in the United States.

00:48:24 The Corporate Takeover
Examine the relationship between powerful corporate entities and power in America in this episode.

01:12:19 The Power of The Media
The media's role in the American power hierarchy is explored in this episode.

01:36:32 Money Dominates Politics
The position of money in American politics is the subject of this episode.

02:00:39 The Power of Wall Street
In the final episode, take a look at how Wall Street institutions factor into America's power structure.

How And Why BlackRock and Vanguard Group and Fidelity and State Street Corp. and Military-Industrial Complex Who Our Really Ruling The World Now ?

The planet’s largest investment fund handles Mexico’s pension funds—and owns the companies they invest in. Cozy!

A new pecking order has emerged on Wall Street. Big banks remain powerful and incredibly profitable—quarterly income has hit record levels throughout 2018, largely due to benefits from the tax cuts. But a decade of financial crisis, regulatory pressures, and (most important) new investing trends has transferred power to a few dominant asset management firms. As more Americans plow retirement savings into passive funds, the buy side has overtaken the sell side.

Buoyed by an index fund collection called iShares that it purchased from Barclays, BlackRock is the world's largest asset manager, with $6.3 trillion of other people’s money under its control. BlackRock’s Aladdin risk-management system, a software tool that can track and analyze trading, monitors a whopping $18 trillion in assets for 200 financial firms; even the Federal Reserve and European central banks use it. This tremendous financial base has made BlackRock something of a Swiss Army knife—institutional investor, money manager, private equity firm, and global government partner rolled into one.

The BlackRock Transparency Project, an initiative from the Campaign for Accountability, a watchdog organization focused on public corruption, seeks to demystify the firm’s “access and influence” business model. BlackRock forges close relationships with governments to outpace competitors, attracting special benefits and avoiding onerous regulatory standards. Since 2004, researchers note, BlackRock has hired at least 84 former government officials, regulators, and central bankers worldwide. This can quickly bleed into conflicts of interest and official corruption.

For example, it's no secret that BlackRock

CEO Larry Fink built a shadow government of former agency officials in a bid to become Hillary Clinton’s Treasury secretary. That didn’t stop Fink from becoming part of the main private-sector advisory organization to Donald Trump until that panel disbanded after Charlottesville.

Links to leaders in both parties have enabled BlackRock to successfully fight designation as a systemically important financial institution, keeping its trillions outside the Dodd-Frank regulatory perimeter. The Treasury Department official leading efforts to relax that designation and keep asset managers outside its grip is Craig Phillips, a former BlackRock executive.

This model of fused BlackRock/government relations doesn't stop in the United States, as researchers at the BlackRock Transparency Project have laid out in a series of reports. The first focused on Canada's Infrastructure Bank, a public-private partnership for low-cost loans for road and bridge projects, which BlackRock advised on creating and helped staff with friendly executives. BlackRock subsequently stands to gain from the bank it helped construct.

The latest report, provided exclusively to the Prospect, details a deep tangle of relationships between BlackRock and the outgoing government of Enrique Peña Nieto in Mexico. This has bolstered BlackRock’s efforts to generate an infrastructure business in Mexico from scratch. Since 2012, BlackRock has purchased stakes in Mexican toll roads, hospitals, gas pipelines, prisons, oil exploration businesses, and a coal-fired power plant.

Alternative investments like infrastructure projects return higher yields than stocks or bonds. Operating fees are as much as triple those from fixed-income investments, making them lucrative for BlackRock as well. BlackRock’s 2013 annual report featured a section called “The Infrastructure Opportunity,” explaining how its prodigious funds—in particular, pension funds—could fill the gap governments needed to modernize and upgrade their public works.

To make an infrastructure play work, you need a willing government partner. When Peña Nieto took power in Mexico, nearly half of his expressed commitments involved using private capital for infrastructure, including $590 billion in public-private partnerships. BlackRock praised his boldness: “Mexico is an incredible growth story,” Fink said in 2013. “If I were 22 years old and I didn’t know what I wanted to do, I would move to Mexico right now because I think the opportunity is huge there,” he later added.

The opportunity was indeed huge, if you happened to be BlackRock. The firm benefited from the controversial opening of PEMEX, the state-run oil monopoly, to private investment. Within seven months, BlackRock had secured $1 billion in PEMEX energy projects. In June 2015, BlackRock acquired a scandal-ridden Mexican private equity firm called I Cuadrada for $71 million. A month later, Sierra Oil and Gas, a year-old portfolio company of I Cuadrada that had never drilled an oil well, won two major exploration contracts from PEMEX. Sierra was the only bidder.

In another suspicious deal, a contractor named Grupo Tradeco continually missed deadlines for building a private prison in Coahuila state, with accusations of 2.5 billion pesos in waste. But right before BlackRock bought the project, Peña Nieto increased the construction payments for the prison by 18 percent. A third deal involved BlackRock purchasing a contract to build a toll road between Toluca and Naucalpan. A month later, Peña Nieto signed an executive order to resolve a legal dispute over siting the road through what indigenous groups consider sacred land, expropriating 91 acres for the project.

Clearly, BlackRock benefited from its ties to Mexican officials and luminaries. The son of Carlos Slim, Mexico's richest man, is a BlackRock board member. Mexico’s former undersecretary of finance, Gerardo Rodriguez Regordosa, became a managing director in 2013. The CEO of BlackRock Mexico, Isaac Volin, was previously a national bank regulator, and in 2016 he became the general director of a PEMEX subsidiary. Peña Nieto himself met with Larry Fink prior to his election and numerous times afterward.

In addition,

BlackRock exploited changes in Mexican law allowing asset managers to take control of Mexican pension funds.

By placing hundreds of millions of dollars in pension money into its Mexican infrastructure business, BlackRock puts Mexico's state and local governments in an impossible position, says Josh Rosner, an adviser to the BlackRock Transparency Project and co-author of the report.

“If a BlackRock-owned infrastructure project becomes ‘a road to nowhere,' and the government wants to stop funding the project, BlackRock can put the official over a barrel and say, ‘You're putting a loss on pensioners,'” Rosner says. “This would force the public official to choose between a waste of public monies and the risk that they would suffer a political loss of voters.” Such an arrangement virtually guarantees conflicts of interest, and possible corruption, in these projects.

BlackRock's shopping spree in Mexico could be threatened by the July election of leftist Andrés Manuel López Obrador. AMLO, as he's often nicknamed, singled out the PEMEX deal with Sierra Oil and Gas, referring to BlackRock as “the white-collar financial mafia” on Facebook. AMLO’s handpicked energy minister, Rocío Nahle García, has called for the removal of Volin from PEMEX, amid what whe termed “marked favoritism” for companies like BlackRock.

Predictably, BlackRock reacted negatively to the AMLO victory, stating in a “geopolitical risk” report that “deterioration in Mexico’s economic policy” could ensue from it. But its position softened somewhat after a June meeting between AMLO and CEO Fink. So has AMLO’s. He initially promised to reverse all of Peña Nieto’s energy reforms, but now has said he’d merely review PEMEX contracts. And in meetings with BlackRock and dozens of investment funds, AMLO’s top adviser said, “We are really not leftist, we are center-left,” while vowing to stay the course on free trade, central bank independence, and a floating currency.

It seems AMLO has understood what Clinton adviser James Carville learned at the outset of his boss’s presidency: “I used to think if there was reincarnation, I wanted to come back as the president or the pope or a .400 baseball hitter. But now I want to come back as the bond market. You can intimidate everybody.” In Mexico and around the world, a large share of that financier clout is wielded by BlackRock. Such power and influence, often at odds with the public good and combined with potential hazards for the overall financial system, demands additional scrutiny.

With $20 trillion between them, Blackrock and Vanguard could own almost everything by 2028

Imagine a world in which two asset managers call the shots, in which their wealth exceeds current U.S. GDP and where almost every hedge fund, government and retiree is a customer.

It’s closer than you think. BlackRock Inc. and Vanguard Group — already the world’s largest money managers — are less than a decade from managing a total of US$20 trillion, according to Bloomberg News calculations. Amassing that sum will likely upend the asset management industry, intensify their ownership of the largest U.S. companies and test the twin pillars of market efficiency and corporate governance.

None other than Vanguard founder Jack Bogle, widely regarded as the father of the index fund, is raising the prospect that too much money is in too few hands, with BlackRock, Vanguard and State Street Corp. together owning significant stakes in the biggest U.S. companies.

“That’s about 20 per cent owned by this oligopoly of three,” Bogle said at a Nov. 28 appearance at the Council on Foreign Relations in New York. “It is too bad that there aren’t more people in the index-fund business.”

Vanguard is poised to parlay its US$4.7 trillion of assets into more than US$10 trillion by 2023, while BlackRock may hit that mark two years later, up from almost US$6 trillion today, according to Bloomberg News projections based on the companies’ most recent five-year average annual growth rates in assets. Those gains in part reflect a bull market in stocks that’s driven assets into investment products and may not continue.

Investors from individuals to large institutions such as pension and hedge funds have flocked to this duo, won over in part by their low-cost funds and breadth of offerings. The proliferation of exchange-traded funds is also supercharging these firms and will likely continue to do so.

Global ETF assets could explode to US$25 trillion by 2025, according to estimates by Jim Ross, chairman of State Street’s global ETF business. That sum alone would mean trillions of dollars more for BlackRock and Vanguard, based on their current market share.

“Growth is not a goal, nor do we make projections about future growth,” Vanguard spokesman John Woerth said of the Bloomberg calculations.

While bigger may be better for the fund giants, passive funds may be blurring the inherent value of securities, implied in a company’s earnings or cash flow.

The argument goes like this: The number of indexes now outstrips U.S. stocks, with the eruption of passive funds driving demand for securities within these benchmarks, rather than for the broader universe of stocks and bonds. That could inflate or depress the price of these securities versus similar un-indexed assets, which may create bubbles and volatile price movements.

Stocks with outsize exposure to indexed funds could trade more on cross-asset flows and macro views, according to Goldman Sachs Group Inc. The bank found that, for the average stock in the S&P 500, 77 per cent might trade on fundamentals, versus more than 90 per cent a decade ago.

That’s not BlackRock’s experience. “While index investing does play a role, the price discovery process is still dominated by active stock selectors,” executives led by Vice Chairman Barbara Novick wrote in a paper in October, citing the relatively low turnover and small size of passive accounts compared with active strategies.

Another concern is that without the prospect of being part of an index, fewer small or mid-sized companies have an incentive to go public, according to Larry Tabb, founder of Tabb Group LLC, a New York-based firm that analyzes the structure of financial markets. That’s because their stock risks underperforming without the inclusion in an index or an ETF, he said. Benchmarks are governed by rules or a methodology for selection and some require that a security has a certain size or liquidity for inclusion.

We’re not near a tipping point yet. Roughly 37 per cent of assets in U.S.-domiciled equity funds are managed passively, up from 19 per cent in 2009, according to Savita Subramanian at Bank of America Corp. By contrast, in Japan, nearly 70 per cent of domestically focused equity funds are passively managed, suggesting the U.S. can stomach more indexing before market efficiency suffers.

There’s even further to go if you look globally: Only 15 per cent of world equity markets — including funds, separately managed accounts and holdings of individual securities — are passively managed, said Joe Brennan, global head of Vanguard’s equity index group, in an interview.

BlackRock and Vanguard’s dominance raises questions about competition and governance. The companies hold more than 5 per cent of more than 4,400 stocks around the world, research from the University of Amsterdam shows.

That’s making regulators uneasy, with SEC Commissioner Kara Stein asking in February: “Does ownership concentration affect the willingness of companies to compete?” Common ownership by institutional shareholders pushed up airfares by as much as 7 per cent over 14 years starting in 2001 because the shared holdings put less pressure on the airlines to compete, according to a study led by Jose Azar, an assistant professor of economics at IESE Business School. BlackRock and Vanguard are among the five largest shareholders of the three biggest operators.

“As BlackRock and Vanguard grow, and as money flows from active to large passive investors, their per centage share of every firm increases,” said Azar in an interview. “If they cross the 10 per cent threshold, I think for many people that would make it clearer that the growth of large asset managers could create serious concerns for competition in many industries.”

BlackRock has called Azar’s research “vague and implausible” while other academics have questioned his methodology. One of those is Edward Rock. A law professor at New York University, Rock says a variety of legal rules in fact discourage stakes above 10 per cent and he favors creating a safe harbor for holdings up to 15 per cent to incentivize shareholder engagement. The firms are among the biggest holders of some of the world’s largest companies across a range of industries including Google parent Alphabet Inc.and Facebook Inc. in technology, and lenders like Wells Fargo & Co. In the U.S., both companies supported or didn’t oppose 96 per cent of management resolutions on board directors in the year ended June 30, according to their own reports.

“We’ve put more and more efforts behind it but we’ve always had a substantial effort,” said Vanguard’s Brennan. “We’re permanent long-term holders and, given that, we have the strongest interest in the best outcomes.” Their size could also help companies change for the good. Both firms were among the first to join the Investor Stewardship Group, a group of institutional asset managers seeking to foster better corporate governance, according to the organization’s website. Vanguard has doubled its team dedicated to this over the last two years and supported two climate-related shareholder resolutions for the first time. BlackRock has more than 30 people engaging with its portfolio companies. Active managers will be watching these developments closely. While many concede that stemming the passive tide is a challenge, they may see better days as central banks start unwinding a decade of easy monetary policy that’s sapped volatility.

Data show performance among active managers is improving. Some 57 per cent of large-cap stock pickers underperformed the S&P 500 in the year ended June 30, compared with 85 per cent the year before, data from S&P Dow Jones Indices show. And if indexing distorts the market so much that it’s easier to beat, more investors will flock to stock pickers, says Richard Thaler, Nobel laureate, University of Chicago professor and principal at Fuller & Thaler Asset Management.

Right now, though, the duo’s advance appears unstoppable, and the benefits they’ve brought with low-cost investments may outweigh some of the structural issues.

“Given that they’ve grown so big because their fees are so small, these are the kinds of monopolies that don’t keep me up at night,” said Thaler.

Rank Firm/company Country AUM (billion USD)
1 United States BlackRock United States 9,090
2 United States Vanguard Group United States 7,600
3 United States Fidelity Investments United States 4,240
4 United States State Street Global Advisors United States 3,600
5 United States Morgan Stanley United States 3,131
6 United States JPMorgan Chase United States 3,006
7 United States Goldman Sachs United States 2,672

Bilderberg Group and are its members really plotting the New World Order ?
Annual meeting of American and European elite attracts huge amount of suspicion and paranoia but is it really just an 'occasional supper club' ?

The secretive Bilderberg Group gathers for its annual meeting this week, which is taking place in Montreux, Switzerland.

A collective of elite North American and European politicians, business leaders, financiers and academics, the group has attracted a good deal of suspicion over the last half-century, with conspiracy theorists confidently asserting that its members are plotting the New World Order and are hell-bent on global domination.

Protesters who believe the Bilderbergers represent a “shadow world government” regularly picket their yearly meet-ups, creating a need for high security at all times, but attendees insist the group is simply a debating society taking place outside the glare of the political spotlight.

The group publishes its guest list the day before its annual get together – between 120 and 150 are invited by its steering committee – along with a list of the subjects they intend to discuss as a gesture towards transparency. This typically consists of broad issues like macroeconomic concerns, the threat of terrorism and cyber-security.

No minutes are taken, however, and the outcome of their discussions are not made public, hence the assumption that they are a sinister cabal of the rich and powerful with something to hide.

The Bilderberg Group take their name from the Hotel de Bilderberg in Oosterbeek, the Netherlands, where its members first convened on 29 May 1954 at the invitation of Prince Bernhard of Lippe-Biesterfeld.

Its founders – including exiled Polish politician Jozef Retinger, ex-Belgian prime minister Paul van Zeeland and Paul Rijkens, former head of consumer goods giant Unilever – were concerned about a prevailing atmosphere of anti-American sentiment in post-war Europe in a moment when the US was enjoying a consumer boom while holding the fate of the recovering continent in its hands through the Marshall Plan.

The group hoped to revive a spirit of transatlantic brotherhood based on political, economic and military cooperation, necessary during the Cold War as the USSR tightened its iron grip on its eastern satellites.

Sixty-one delegates, including 11 Americans, from a total of 12 countries attended the inaugural conference, with candidates chosen to bring complimentary conservative and liberal points of view, future Labour leader Hugh Gaitskell among them. Its success meant subsequent meetings were held in France, Germany, Denmark before the first on American soil at St Simons Island in Georgia.

The Bilderberg Group’s primary goal has reportedly been expanded to take in a more all-encompassing endorsement of Western free market capitalism over the years, although the conspiracy theorists believe their agenda is either to impose pan-global fascism or totalitarian Marxism. They’re just not sure which.

Although members do not as a rule discuss what goes on within its conferences, Labour MP and onetime party deputy leader Denis Healey, a member of the steering committee for more than 30 years, did offer a clear statement of its intentions when quizzed by journalist Jon Ronson for his book Them in 2001.

“To say we were striving for a one-world government is exaggerated, but not wholly unfair,” he said. ”Those of us in Bilderberg felt we couldn’t go on forever fighting one another for nothing and killing people and rendering millions homeless. So we felt that a single community throughout the world would be a good thing.”

Other notable British politicians to have accepted the group’s invitation include Conservatives Alec Douglas-Home and Peter Carrington – who chaired the committee between 1977 and 1980 and between 1990 and 1998 respectively – and Margaret Thatcher, David Owen, Tony Blair, Peter Mandelson, Ed Balls, Ken Clarke and George Osborne. Princes Philip and Charles have also been.

Henry Kissinger is a regular, while Helmut Kohl, Bill Clinton, Bill Gates, Christine Lagarde and Jose Manuel Barroso have all attended among the billionaires and executives from leading banks, corporations and defence industry bigwigs. Perhaps most surprisingly, Ryanair’s Michael O’Leary attended 2015’s event in Telfs-Buchen in the Austrian Tyrol.

Rather than a SPECTRE-like organisation reinforcing its interests by choosing presidents and controlling public opinion through the media, the Bilderberg Group is nothing more sinister than “an occasional supper club”, according to David Aaronovitch, author of Voodoo Histories (2009).

But even if the Bilderberg Group are not David Icke’s slavering lizard men in silk hoods, the idea that they might be grouped in with the Illuminati has provided a convenient cloaking device, says journalist Hannah Borno.

“Conspiracy theories have served the group quite well, because any serious scrutiny could be dismissed as hysterical and shrill,” she said. ”But look at the participant list. These people have cleared days from their extremely busy schedules.”

American alt-right “shock jock” Alex Jones has been one of the loudest proponents of such theories, stating on air: “We know you are ruthless. We know you are evil. We respect your dark power”.

He appeared on Andrew Neil’s Sunday Politics show in 2013 to discuss the Bilderberg Group’s meeting at a hotel in Watford, ranting wildly about them as “puppeteers above the major parties” and insisting on their role in the founding of the EU. ”A Nazi plan”, according to Mr Jones.

He has more recently attended protest camps, sent InfoWars pundit Owen Shroyer to try and invade their 2017 gathering in Chantilly, Virginia, and accused them of plotting to overthrow US president Donald Trump.

That might all sound alarming but is fairly mild by Mr Jones’s standards. He also believes Barack Obama and Hillary Clinton are demons and that the Pentagon has a secret “gay bomb”.

Will AI The Take Over The World and Rules that US policymakers are considering ?
AI is getting seriously good. And the federal government is finally getting serious about AI.

The White House announced a suite of artificial intelligence policies in May. More recently, they brokered a number of voluntary safety commitments from leading AI companies in July. That included commitments to both internal and third-party testing of AI products to ensure they’re secure against cyberattack and guard against misuse by bad actors.

Senate Majority Leader Chuck Schumer outlined his preferred approach to regulation in a June speech and promised prompt legislation, telling his audience, “many of you have spent months calling on us to act. I hear you loud and clear.” Independent regulators like the Federal Trade Commission have been going public to outline how they plan to approach the technology. A bipartisan group wants to ban the use of AI to make nuclear launch decisions, at the very minimum.

But “knowing you’re going to do something” and “knowing what that something is” are two different things. AI policy is still pretty virgin terrain in DC, and proposals from government leaders tend to be articulated with lots of jargon, usually involving invocations of broad ideas or requests for public input and additional study, rather than specific plans for action. Principles, rather than programming. Indeed, the US government’s record to date on AI has mostly involved vague calls for “continued United States leadership in artificial intelligence research and development” or “adoption of artificial intelligence technologies in the Federal Government,” which is fine, but not exactly concrete policy.

That said, we probably are going to see more specific action soon given the unprecedented degree of public attention and number of congressional hearings devoted to AI. AI companies themselves are actively working on self-regulation in the hope of setting the tone for regulation by others. That — plus the sheer importance of an emerging technology like AI — makes it worth digging a little deeper into what action in DC might involve.

You can break most of the ideas circulating into one of four rough categories:

Rules: New regulations and laws for individuals and companies training AI models, building or selling chips used for AI training, and/or using AI models in their business
Institutions: New government agencies or international organizations that can implement and enforce these new regulations and laws
Money: Additional funding for research, either to expand AI capabilities or to ensure safety
People: Expanded high-skilled immigration and increased education funding to build out a workforce that can build and control AI
New rules
Making new rules for AI developers — whether in the form of voluntary standards, binding regulations from existing agencies, new laws passed by Congress, or international agreements binding several countries — is by far the most crowded space here, the most consequential, and the most contested.

On one end of the spectrum are techno libertarians who look warily on attempts by the government to mandate rules for AI, fearing that this could slow down progress or, worse, lead to regulatory capture where rules are written to benefit a small handful of currently dominant companies like OpenAI. The Electronic Frontier Foundation and the R Street Institute are probably the leading representatives of this perspective in DC.

Other stakeholders, though, want extensive new rulemaking and legislating on a variety of AI topics. Some, like Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT), want sweeping changes to rules around liability, enabling citizens to sue AI companies or prosecutors to indict them if their products cause certain harms.

One category of proposals deals with how AI systems interface with existing rules around copyright, privacy, and bias based on race, gender, sexual orientation, and disability. Think of it more like AI ethics rather than AI safety.

Copyright: The US Copyright Office has issued rulings suggesting that most texts, images, and videos output by AI systems cannot be copyrighted as original works, as they were not created by a human. Meanwhile, large models like GPT-4 and Stable Diffusion rely on massive training datasets that usually include copyrighted texts and images. This has prompted myriad lawsuits and provisions in the European Union’s AI Act requiring model builders to “publish information on the use of training data protected under copyright law.” More regulations and laws from either US agencies or Congress could be forthcoming.

Privacy: Just as large AI companies have faced lawsuits for copyright violations in the construction of their models, so too have some plaintiffs argued that the mass web scraping necessary to collect the terabytes of data needed to train the models represents an invasion of privacy. The revelation in March of a since-patched data vulnerability that allowed ChatGPT users to access other users’ chat histories, and even their payment information, raised further alarms. Italy even briefly banned the service over privacy concerns about the training data. (It’s since been allowed back.) Policymakers have been focusing on similar issues in social media and online advertising for some time now, with common proposals including a full ban on using personal data to target ads, and FTC action to require “data minimization” in which websites can only collect data relevant to a narrow function of the site.

Algorithmic bias: In part because they draw upon datasets that inevitably reflect stereotypes and biases in humans’ writing, legal decisions, photography, and more, AI systems have often exhibited biases with the potential to harm women, people of color, and other marginalized groups. The main congressional proposal on this topic is the Algorithmic Accountability Act, which would require companies to evaluate algorithmic systems they use — in other words, AI — for “bias, effectiveness and other factors,” and enlist the Federal Trade Commission to enforce the requirement. The FTC has said it will crack down using existing authority to prevent “the sale or use of — for example — racially biased algorithms”; what these enforcement actions might look like in practice is as yet unclear.

Another set of proposals views AI through a national security or extreme risk perspective, trying to prevent either more powerful rogue AIs that could elude human control or the misuse of AI systems by terrorist groups or hostile nation-states (particularly China) to develop weapons and other harmful products. A future rogue AI with sufficiently high capabilities, that humans cannot shut down or coerce into following a safe goal, would pose a high risk of harming humans, even if such harm is merely incidental to its ultimate goal. More immediately, sufficiently powerful AI models could gain superhuman abilities in hacking, enabling malign users to access sensitive data or even military equipment; they could also be employed to design and deploy pathogens more dangerous than anything nature has yet cooked up.

Mandatory auditing, with fines against violators: As with racial or gender bias, many proposals to deal with uncontrollable AIs or extreme misuse focus on evaluations and “red-teaming” (attempts to get models to exhibit dangerous behavior, with the aim of discovering weaknesses or flaws in the models) which could identify worrisome capabilities or behaviors by frontier AI models. A recent paper by 24 AI governance experts (including co-authors from leading firms like Google DeepMind and OpenAI) argued that regulators should conduct risk assessments before release, specifically asking “1) which dangerous capabilities does or could the model possess, if any?, and (2) how controllable is the model?”

The authors call for AI firms to apply these risk assessments to themselves, with audits and red-teaming from third-party entities (like government regulators) to ensure the firms are following protocol. Regulators should be given regular access to documentation on how the models were trained and fine-tuned; in extreme cases, “significant administrative fines or civil penalties” from regulators for failing to follow best practices could be necessary. In less severe cases, regulators could “name and shame” violators.

In the nearer term, some in Congress, like Sens. Ted Budd (R-NC) and Ed Markey (D-MA), are pushing legislation to require the Department of Health and Human Services to conduct risk assessments of the biological dangers posed by AI and develop a strategy for preventing its use for bioweapons or artificial pandemics. These are fairly light requirements but might serve as a first step toward more binding regulation. Many biosecurity experts are worried that AIs capable of guiding amateurs through the process of creating deadly bioweapons will emerge soon, making this particular area very high-stakes.

Licensing requirements: The attorney Andrew Tutt in 2017 proposed a more far-reaching approach than simply mandating risk evaluations, one instead modeled on tougher US regulations of food and pharmaceuticals. The Food and Drug Administration generally does not allow drugs on the market that have not been tested for safety and effectiveness. That has largely not been the case for software — no governmental safety testing is done, for example, before a new social media platform is introduced. In Tutt’s vision, a similar agency could “require pre-market approval before algorithms can be deployed” in certain applications; “for example, a self-driving car algorithm could be required to replicate the safety-per-mile of a typical vehicle driven in 2012.”

This would effectively require certain algorithms to receive a government “license” before they can be publicly released. The idea of licensing for AI has taken off in recent months, with support from some in industry. OpenAI CEO Sam Altman called for “licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities” in testimony before Congress. Jason Matheny, CEO of the Rand Corporation and a former senior Biden adviser, told the Senate, “we need a licensing regime, a governance system of guardrails around the models that are being built.” Gary Marcus, an NYU professor and prominent voice on AI, urged Congress to specifically follow the FDA model as it ponders regulating AI, requiring pre-approval before deployment.

“Compute” regulation: Training advanced AI models requires a lot of computing, including actual math conducted by graphics processing units (GPUs) or other more specialized chips to train and fine-tune neural networks. Cut off access to advanced chips or large orders of ordinary chips and you slow AI progress. Harvard computer scientist Yonadav Shavit has proposed one model for regulating compute. Shavit’s approach would place firmware (low-level code embedded into hardware) that can save “snapshots” of the neural networks being trained, so inspectors can examine those snapshots later, and would require AI companies to save information about their training runs so they can verify their activities match the information on the chip firmware. He would also have regulators monitor chip orders to ensure no one is purchasing a critical mass of unmonitored chips not subject to these regulations, just as biorisk experts have advocated monitoring gene synthesis orders to prevent the deliberate engineering of dangerous pathogens.

Export controls, like those the US placed restricting the sale of advanced chips to China, could also count as a form of compute regulation meant to limit certain nations or firms’ ability to train advanced models.

New institutions for a new time
Implementing all of the above regulations requires government institutions with substantial staff and funding. Some of the work could be done, and already is being done, by existing agencies. The Federal Trade Commission has been aggressive, especially on privacy and bias issues, and the National Institute of Standards and Technology, a scientific agency that develops voluntary standards for a number of fields, has begun work on developing best practices for AI development and deployment that could function either as voluntary guidelines or the basis of future mandatory regulations.

But the scale of the challenge AI poses has also led to proposals to add entirely new agencies at the national and international level.

The National Artificial Intelligence Research Resource: One new federal institution dedicated to AI is already in the works. A 2020 law mandated the creation of a task force to design a National Artificial Intelligence Research Resource (NAIRR), a federally funded group that would provide compute, data, and other services for universities, government researchers, and others who currently lack the ability to do cutting-edge work. The final report by the task force asked for $2.6 billion over six years, though Congress has not shown much interest in allocating that funding as of yet. A bipartisan group in the House and Senate recently introduced legislation that would formally establish NAIRR and instruct the National Science Foundation to fund it.

In contrast to technologies like nuclear power and even the early internet, AI is dominated by the private sector, not the government or universities. Companies like OpenAI, Google DeepMind, and Anthropic spend hundreds of millions of dollars on the server processing, datasets, and other raw materials necessary to build advanced models; the goal of NAIRR is to level the playing field somewhat, “democratizing access to the cyberinfrastructure that fuels AI research and development,” in the words of the National Science Foundation’s director.

Dedicated regulator for AI: While agencies like the FTC, the National Institute of Standards and Technology, and the Copyright Office are already working on standards and regulations for AI, some stakeholders and experts have argued the topic requires a new, dedicated regulator that can focus more specifically on AI without juggling it and other issues.

In their testimony before Congress, OpenAI CEO Sam Altman and NYU professor Gary Marcus both endorsed creating a new agency. Brad Smith, president of Microsoft, has echoed his business partner Altman and argued that new AI regulations are “best implemented by a new government agency.” Computer scientist Ben Schneiderman has suggested a National Algorithms Safety Board, modeled on the National Transportation Safety Board that investigates all airplane crashes, some highway crashes, and other transportation safety disasters.

Sens. Michael Bennet (D-CO) and Peter Welch (D-VT) have introduced legislation that would act on these suggestions and create a Federal Digital Platform Commission, charged with regulating AI and social media.

But others have pushed back and argued existing agencies are sufficient. Kent Walker, the top policy official at Google, had suggested that the National Institute of Standards and Technology (NIST) should be the main agency handling AI. (Notably, NIST does not have any regulatory powers and cannot compel tech companies to do anything.) Christina Montgomery, a top executive at IBM, similarly told Congress that taking time to set up a new agency risks “slow[ing] down regulation to address real risks right now.”

CERN for AI: Rishi Sunak, the UK prime minister, pitched President Joe Biden on setting up a “CERN for AI,” modeled after the Conseil européen pour la recherche nucléaire (CERN) in Geneva, which hosts large-scale particle accelerators for physics research and was the birthplace of the World Wide Web. Advocates like the computer scientist Holger Hoos argue that setting up such a facility would create “a beacon that is really big and bright,” attracting talent from all over the world to collaborate on AI in one location not controlled by a private company, making the exchange of ideas easier. (The sky-high salaries being offered to AI experts by those private companies, however, might limit its appeal unless this institution could match them.)

A recent paper from a team of AI governance experts at Google DeepMind, OpenAI, several universities, and elsewhere proposed specifically setting up a CERN-like project for AI safety. “Researchers—including those who would not otherwise be working on AI safety—could be drawn by its international stature and enabled by the project’s exceptional compute, engineers and model access,” the authors write. “The Project would become a vibrant research community that benefits from tighter information flows and a collective focus on AI safety.”

IAEA for AI: Top OpenAI executives Sam Altman, Greg Brockman, and Ilya Sutskever said in May that the world will “eventually need something like an IAEA for superintelligence efforts,” a reference to the International Atomic Energy Agency in Vienna, the UN institution charged with controlling nuclear weapons proliferation and governing the safe deployment of nuclear power. UN Secretary-General António Guterres has echoed the call.

Others have pushed back on the IAEA analogy, noting that the IAEA itself has failed to prevent nuclear proliferation to France, China, Israel, India, South Africa, Pakistan, and North Korea, all of which developed their bombs after the IAEA’s inception in 1957. (South Africa voluntarily destroyed its bombs as part of the transition from apartheid.) Others have noted that the IAEA’s focus on monitoring physical materials like uranium and plutonium lacks a clear analogy to AI; while physical chips are necessary, they’re much harder to track than the rare radioactive material used for nuclear bombs, at least without the controls Yonadav Shavit has proposed.

In the same paper discussing a CERN-like institution for AI, the authors considered a model for an Advanced AI Governance Organization that can promote standards for countries to adopt on AI and monitor compliance with those standards, and a Frontier AI Collaborative that could function a bit like the US National Artificial Intelligence Research Resource on an international scale and spread access to AI tech to less affluent countries. Rather than copying the IAEA directly, the aim would be to identify some specific activities that a multilateral organization could engage in on AI and build a team around them.

New funding for AI
Implementing new regulations and creating new institutions to deal with AI will, of course, require some funding from Congress. Beyond the tasks described above, AI policy experts have been proposing new funding specifically for AI capabilities and safety research by federal labs (which would have different and less commercially driven priorities than private companies), and for the development of voluntary standards for private actors to follow on the topic.

More funding for the Department of Energy: To date, much federal investment in AI has focused on military applications; the Biden administration’s latest budget request includes $1.8 billion in defense spending on AI for the next year alone. The recent House and Senate defense spending bills feature numerous AI-specific provisions. But AI is a general-purpose technology with broad applications outside of warfare, and a growing number of AI policy experts are suggesting that the Department of Energy (DOE), rather than the Pentagon, is the proper home for non-defense AI research spending.

The DOE runs the national laboratories system, employing tens of thousands of people, and through those labs it already invests considerable sums into AI research. “[DOE] has profound expertise in artificial intelligence and high-performance computing, as well as established work regulating industries and establishing standards,” Divyansh Kaushik of the Federation of American Scientists has written. “It also has experience addressing intricate dual-use technology implications and capability as a grant-making research agency.” These make it “best-suited” to lead AI research efforts.

On the Christopher Nolan end of the scale, the Foundation for American Innovation’s Sam Hammond has suggested that a “Manhattan Project for AI Safety” be housed in the Department of Energy. The project would facilitate coordination between private-sector actors and the government on safety measures, and create new computing facilities including ones that are “air gapped,” deliberately not connected to the broader internet, “ensuring that future, more powerful AIs are unable to escape onto the open internet.”

More funding for the National Science Foundation: Another place in government that has already been funding research on AI is the National Science Foundation, the feds’ main scientific grantmaker outside of the medical sciences.

The Federation of American Scientists’ Matt Korda and Divyansh Kaushik have argued that beyond additional funding, the agency needs to undergo a “strategic shift” in how it spends, moving away from enhancing the capabilities of AI models and toward “safety-related initiatives that may lead to more sustainable innovations and fewer unintended consequences.”

More funding for the National Institute of Standards and Technology: NIST is not exactly the most famous government agency there is, but its unique role as a generator of voluntary standards and best practices to government and industry, without any regulatory function, makes it an important actor at this moment in AI history. The field is in enough flux that agreement on what standards should be binding is limited.

In the meantime, NIST has released an AI Risk Management Framework offering initial standards and best practices for the sector. It has also created a Trustworthy & Responsible Artificial Intelligence Resource Center, designed to provide training and documents to help industry, government, and academia abide by the Risk Management Framework. Some in Congress want to mandate federal agencies abide by the framework, which would go a long way toward adoption.

The AI firm Anthropic, which has made safety a priority, has proposed a $15 million annual increase in funding for NIST, funding 22 additional staffers, to double staffing working on AI and build bigger “testing environments” where it can experiment on AI systems and develop techniques to measure their capabilities and possibly dangerous behaviors.

New people to take the lead on AI research
A recent survey of AI researchers from the Center for Security and Emerging Technology (CSET), a leading think tank on AI issues, concluded that processors and “compute” are not the main bottleneck limiting progress on AI. The bottleneck for making intelligent machines is intelligent humans; building advanced models requires highly trained scientists and engineers, all of whom are currently in short supply relative to the extraordinary demand for their talents.

That has led many AI experts to argue that US policy has to focus on growing the number of trained AI professionals, through both expanded immigration and by providing more scholarships for US-born aspiring researchers.

Expanded high-skilled immigration: In 2021, the National Security Commission on Artificial Intelligence, a group charged by Congress with developing recommendations for federal AI policy, argued that Congress should dramatically expand visas and green cards for workers and students in AI. These are incredibly common proposals in AI circles, for clear reasons. The US immigration system has created major barriers for AI researchers seeking to come here. One survey found that 69 percent of AI researchers in the US said that visa and immigration issues were a serious problem for them, compared to 44 percent and just 29 percent in the UK and Canada, respectively.

Those countries are now using these difficulties to aggressively recruit STEM professionals rejected from the US. The analyst Remco Zwetsloot has concluded that these dynamics are creating “a consensus among U.S. technology and national security leaders that STEM immigration reform is now ... ‘a national security imperative.’”

Funding for AI education programs: Similarly, some policymakers have proposed expanding subsidies for students to gain training in machine learning and other disciplines (like cybersecurity and processor design) relevant to advanced AI. Sens. Gary Peters (D-MI) and John Thune (R-SD) have proposed the AI Scholarship-for-Service Act, which would provide undergraduate and graduate scholarships to students who commit to working in the public sector after graduation.

The ground is still shifting
These four areas — regulation, institutions, money, and people — make up the bulk of the AI policy conversation right now. But I would be remiss if I did not note that this conversation is evolving quite rapidly. If you told me in January, barely a month after the release of ChatGPT, that CEOs of OpenAI and Anthropic would be testifying before Congress and that members would be taking their ideas, and those of unaffiliated AI risk experts, seriously, I would have been shocked. But that’s the territory we’re in now.

The terrain is shifting fast enough that we could be in an entirely different place in a few months, with entirely different leading actors. Maybe the AI labs lose influence; maybe certain civil society groups gain it; maybe the military becomes a bigger component of these talks.

All that makes now a particularly sensitive moment for the future of AI. There’s an idea in tech policy called the Collingridge dilemma: When a technology is novel, it’s easier to change its direction or regulate it, but it’s also much harder to know what the effect of the technology will be. Once the effect of the technology is known, that effect becomes harder to change.

We’re in the “unknown impact, easier to influence direction” stage on AI. This isn’t an area of intractable gridlock in DC, at least not yet. But it’s also an area where the actual technology feels slippery, and everything we think we know about it feels open to revision.

Rules for the New World Order: A Citizen’s Proposal We who were born to democracy value it, but human beings are not all alike. Some may prefer to live under dictatorship. Anyway, whether other people like their leaders is no business of ours. As long as other governments stay within their own borders, they are not our problem.

Those were the dominant ideas of the old world order: cultural relativism and national sovereignty.

Now there is hardly a nation in which people have not let it be known that they want to choose their leaders. Democracy is not a value limited to certain kinds of people, it is shared, it is human. We should have known that. We should also have known the other lesson the world is now demonstrating, that when people lose the ability to choose their government, when they are treated brutally by their government, or when their government collapses, they need and want help from beyond their borders.

The new world order is trying to form around the principles of self-determination and international concern for the workability and decency of all governments. These principles are not yet operational; at least they don’t yet add up to an “order.” Every new crisis — Kuwait, Somalia, Bosnia, Russia — is treated differently. We are making this world order up as we go along, and we are making mistakes.

Mistakes are understandable during a period of learning. But are we learning? The least we owe the ravaged Bosnians is some sense that suffering as terrible as theirs will never be permitted again, anywhere. The least we owe their children and our own is a world order based on rules more humane than: “We will interfere with a sovereign government when a) there is a lot of oil at stake, or b) they are trying to develop a nuclear bomb but haven’t yet succeeded, or c) the fight will be easy.”

At first glance it looks difficult to set out guidelines for the new world order. Who makes up the “self” in self-determination? What powers should one country or group of countries have over another? Should those powers be invoked when a government speaks hatefully of certain groups, or only after it authorizes soldiers to rape and massacre them? What if those groups are trying to make themselves into citizens of another country?

The only answers that can endure are shared, human, moral answers. It is not hard to find moral answers. They are the ones you would choose if they might apply to YOU. It doesn’t take a Cyrus Vance or Lord Owen to ask, under what conditions would I want outside forces to come to MY aid?

Here, to get the discussion moving, are my answers to that question. What are yours?

I want to choose and correct my own government except when:

– my government makes it impossible for me to do that by restricting my civil rights or making me live in fear of speaking my mind,

– my government systematically persecutes my kind of people with hate talk, blatant economic discrimination, or deadly force,

– my government refuses to meet international obligations (such as environmental or nuclear treaties), the failure of which would endanger me and others,

– my government has broken down and is not functioning,

– a violent dispute in a neighboring country threatens to run over into mine,

– a natural or political emergency has so disrupted the normal economy that the basics of life, especially water or food, are not available to me.

If an intervention is called for, I want it to come from a coalition of nations, not one nation, especially not a neighboring nation. I want it to come when government-sanctioned hate talk begins, not when shooting begins.

I want the intervention to start with verbal warnings and negotiation, but to proceed rapidly, if my government does not respond, to sanctions of increasing severity, announced in advance, applied without hesitation. I want economic sanctions to cut off commodities that enforce power, such as weapons, loans, and luxuries, not necessities, such as food or medicine. If, as in Somalia, life-maintaining commodities become the currency for obtaining weapons, I want stronger intervention.

I want to contribute to my own liberation. I want interveners to understand the power of organized, nonviolent resistance. I want to be consulted and utilized for my understanding of my culture and for my power to resist. If it comes to fighting, I want to help fight.

If there is no alternative to armed intervention, I want it to be swift, decisive, and aimed at the centers of power.

If my leaders have condoned criminal action, I want them to be treated like criminals. I do not want them to negotiate for me. I want any settlement to award me and others what is ours by right, not what has been seized by power.

I want the intervention to end as soon as possible, but not before my people have firm control of our own government.

After many people have contributed to and perfected a list something like this, I want it cast into enforceable language, adopted with solemnity, and applied without discrimination to all governments, on behalf of all people.

Loading 2 comments...