What Parents Don't Know about College Graduation Rates Can Hurt


No. 2, February 2011

You would not buy a car without knowing its mileage, or a home without knowing its age and condition. Why would you invest in a college education without knowing graduation rates? Over the next several months, high school seniors across the country will decide where to go to college. Many students and their parents will base the decision on program offerings, cost, and distance from home, but they may be overlooking a vital piece of information. A recent study[1] of parents choosing between two public colleges found that, when provided with graduation-rate data, 15 percent switched their preference to the school with the higher graduation rate. The information effects were pronounced among less-advantaged parents and those less informed about the college application process. Providing this key information could have real economic implications for students' lifetime earning potential. This Outlook presents the findings of the study and proposes policies to improve consumer information and increase college completion.

Key points in this Outlook:

  • Informing prospective students and their parents of college graduation rates could lead to more college graduates, which would in turn produce increased work-life wages and tax revenue.
  • The Department of Education should require colleges receiving federal aid to report graduation rates on all admissions correspondence.
  • High school counselors and teachers should guide students toward colleges with higher retention and graduation rates.

Spurred on by the Obama administration, and supported by the Bill and Melinda Gates Foundation and the Lumina Foundation, the United States has embarked on a "college completion agenda," aimed at increasing the number of American adults with postsecondary degrees. One barrier to making progress on this front is simple: colleges that admit similar students often have widely different graduation rates, and far too many colleges fail to get a majority of their students across the finish line. As a result, every year hundreds of thousands of students enroll in schools where they will fail to make it through their first year--let alone receive a degree within six years--when they could have enrolled in schools where their chances of success would have been far higher.

While many disagree about the causes of such failure, analysts of all stripes agree that one step toward better college performance is improving the information about quality and costs available to prospective students and their families. The logic behind this argument is clear: if, when faced with the choice between two colleges with different records of success, students and their families had comparable measures of college performance at their fingertips, they would choose the one with the better outcomes. At the individual level, such informed decisions could increase students' probability of successfully completing a degree. At the systemic level, as individual consumers make these choices, lagging colleges and universities would lose students to better-performing ones, and the resulting market pressures would hold institutions accountable for how well they educate, retain, and graduate their students.

The need for improved transparency has not been lost on policymakers. In 1990, Congress passed the Student Right to Know Act, which required all colleges and universities whose students receive federal financial aid to report graduation rates for first-time, full-time students. The Higher Education Opportunity Act of 2008 pushed the consumer-information front further, mandating that colleges and universities receiving federal student aid offer a "net-price calculator" that prospective students and parents can use to estimate out-of-pocket expenses. Students filling out the Free Application for Federal Student Aid (FAFSA) now receive the graduation rates of the schools they list on their application.

While the push to improve consumer information in the higher education market has gained momentum, whether these improvements will lead to increases in degree completion depends on the answer to a basic question: if we provide consumers with better information on quality and costs, will they use it?

We set out to answer this question systematically, using an experiment embedded in a survey of parental attitudes about higher education. We asked a representative sample of one thousand parents of high-school-age children in five states to choose between two public colleges in their state based on their own judgments and information we provided to them. Respondents were randomly assigned to a treatment or control group. Treated respondents received the same set of basic facts as the control group as well as information about each school's six-year graduation rate. Because assignment was randomized and the only difference between the college profiles was the addition of graduation-rate information, any differences in choice behavior can be attributed to providing this basic indicator of school performance.

Overall, we found that providing graduation-rate information increased the probability that parents would choose the institution with the higher graduation rate by about 15 percentage points. Though the experiment was not designed to resemble the actual college search, where preferences are often expressed over more than two schools, we argue that pairwise comparisons make up the basic building blocks of the college-choice process. Moreover, because respondents were asked about real public colleges and universities in their region and received true information about school characteristics, the experiments discussed below have considerable external validity.

Our findings suggest that parents will evaluate schools on the basis of easily understood quality metrics, provided they have access to the information. Institutions of higher education should see this as a positive finding, as it shows that consumers reward schools that outperform their peers, leading to increased enrollments. The results also reveal why we need an array of institutional quality measures, from information on student learning to labor-market success to the rate at which disadvantaged students enroll and graduate. Because prospective consumers will seize on information that helps distinguish colleges from one another, policymakers should ensure that there are a variety of easily understood indicators for these consumers to use.

A Deeper Look at Information, Preferences, and Consumer Choice in Higher Education

To make our choice experiment as authentic as possible, we chose colleges that would realistically be in a prospective student's set of choices. Though the college-choice process is multifaceted and complex for students and their families, overall enrollment patterns suggest key facts about what kinds of schools the majority of prospective students consider, and we based our experiment on those facts. Eight out of ten students attend colleges and universities within their home state[2]--indeed, national data show that most attend schools within one hundred miles of home.[3] Moreover, far more students choose public colleges and universities than private ones, by a margin of nearly three to one.[4] Finally, while most people are familiar with the relentless race for prestige among the elite colleges in the country, these schools are the destination for only a small fraction of the student population. The vast majority of students attend colleges in the middle range of the admissions selectivity scale: about two-thirds of full-time undergraduates are in schools classified as "competitive" or "very competitive" in Barron's Profiles of American Colleges, and of those students, almost two-thirds are enrolled in "competitive" schools.[5]

In short, most prospective students and their families are likely to apply to and enroll in colleges that are public, close to home, and moderately selective. To approximate the alternatives that parents and prospective students are likely to encounter, we constructed the choice sets in our experiment to reflect this reality.

We began with the five most populous states in the country--California, Florida, Illinois, New York, and Texas--and sampled two hundred parents in each state. The samples are representative of the entire state on race and income.[6]

We divided the states into substate "regions" based on the suggestion of experts familiar with the state's culture and university system. Illinois residents, for example, describe the geography of their state in terms of the Chicago area and "downstate." In light of this, we divided Illinois into two regions, one composed of the Chicago area and adjacent counties, and the other composed of all counties south and west of the twenty-four northeastern counties. Likewise, New York is generally divided between New York City and "upstate," and so on for the other three states. Any geographical division makes arbitrary distinctions, but our decisions were made in light of each state's particular geographical layout.

We then paired public state universities that were similar on a set of important characteristics. The paired public colleges shared the same level of admissions selectivity (in nine choice sets both schools were in the Barron's "competitive" category in 2009, and in one they were both in the "very competitive" category). To the extent possible, we paired schools that had similar demographic characteristics and costs of attendance. Given the importance of a college's proximity to home, we also made every effort to choose two institutions that were relatively close to one another and in the same region as the respondent. In nine out of ten choice sets, both colleges were in the same region as the respondent.[7]

There was one key difference between the paired schools: their six-year, Student Right to Know graduation rates as collected by the National Center for Education Statistics (NCES). As our earlier report, Diplomas and Dropouts, documented, the gap between schools within the same state with the same selectivity level can be pronounced.[8] Within pairs, the gaps ranged from 8 percentage points (Northern Illinois versus Eastern Illinois) to 17 percentage points (CUNY City College versus CUNY Queens College), and the absolute graduation rates ranged from 36 percent (CUNY City College) to 65 percent (SUNY New Paltz).

The question is whether providing one subset of respondents with an additional piece of information on student success has an effect on parental preferences. This is not to say that graduation rates are beyond reproach as indicators of institutional performance. As flawed and coarse as they may be, however, six-year completion rates are one of the only comparable measures that exist across four-year colleges and universities. Whether parents respond to this additional information has implications for broader arguments about market accountability through consumer choice across all levels of postsecondary education.

Data and Methods. Our college-choice experiment was embedded in a survey of one thousand parents with children twelve to nineteen years old, at least one of whom had not yet attended college. The survey was administered in the spring of 2010 by YouGov/Polimetrix, an online polling firm that maintains a rolling, representative sample of Americans.[9] The survey asked parents about a range of higher education topics, including their knowledge of the college application process, their aspirations for their children, their knowledge of college costs, and the characteristics they feel are important in a college.

The experiment itself was relatively simple. In the middle of the survey, respondents were told that they would receive information about two different public colleges in their state and that they would be asked which one they would recommend to their child. The prompt also previewed some of the information they would receive and explained the concept of admissions selectivity and the selectivity ratings we used.

Each subject then received the following information in an easy-to-read table with two columns, one for each college: name of the college; location; whether it was located in a city, town, or rural area; admissions selectivity (based on the Barron's guide); number of undergraduates; demographic characteristics of full-time undergraduate students (percent white, percent African American, percent Hispanic, and percent Asian); and cost of attendance (tuition and fees, on-campus room and board, and books).

Respondents in the treatment group received the same information plus information about the six-year graduation rate for each college. Specifically, one row of the table was labeled "six-year graduation rate," and the cells reported "XX% of first-time students graduate in six years or less" for each school.[10]

All respondents were then asked: "If you had to choose one of the following options, which of these schools would you recommend [name of child] apply to?"[11] Respondents then chose their preferred school and moved on with the survey.

The outcome we are interested in is college preference, which can be expressed as the proportion of parents choosing one school over the other. The treatment effect is simply the difference in the proportion of parents who chose the school with the higher graduation rate (which we will call College A) between the control and treatment groups. Put simply, did a larger proportion of parents choose College A, the school with the higher graduation rate, in the treatment group than in the control group?
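To make the estimator concrete, here is a minimal sketch in Python of the difference-in-proportions calculation, with a simple two-proportion z-test. The counts are hypothetical, chosen only to mirror the magnitudes reported below; this illustrates the arithmetic and is not the authors' analysis code.

    import math

    # Hypothetical counts: 500 control respondents, 46% choose College A;
    # 500 treated respondents, 61% choose College A.
    n_control, a_control = 500, 230
    n_treat, a_treat = 500, 305

    p_control = a_control / n_control      # share choosing College A, control group
    p_treat = a_treat / n_treat            # share choosing College A, treatment group
    effect = p_treat - p_control           # treatment effect: difference in proportions

    # Two-proportion z-test for the difference
    p_pooled = (a_control + a_treat) / (n_control + n_treat)
    se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n_control + 1 / n_treat))
    z = effect / se

    print(f"Control: {p_control:.0%}  Treatment: {p_treat:.0%}  Effect: {effect:+.0%}  z = {z:.2f}")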

Graduation-Rate Information Matters. Figure 1 reveals the overall result: parents who are supplied with graduation-rate information are significantly more likely to choose the college with the higher graduation rate than those who do not receive this information, by about 15 percentage points.

[Figure 1]

In the control group, about 46 percent of respondents expressed a preference for College A, revealing a slight bias in favor of College B. This is likely related to the fact that College A was, on average, slightly farther away than the other university. When provided with graduation-rate information, however, 61 percent of respondents preferred the college with the higher graduation rate--a shift of 15 percentage points.

We also estimated the overall treatment effect after controlling for regional differences, and with controls for both region and distance (not reported here). Both methods produced results that were identical to the difference in means in figure 1.

Information Effects across Different Types of Respondents. We also explored whether these information effects varied across different types of respondents. If parental information about colleges is correlated with socioeconomic status (SES), high-SES respondents may be more likely to recognize differences between schools than other consumers. Because sophisticated consumers are already well informed, providing them with one additional piece of information may have little effect on their preferences. Meanwhile, in the absence of substantial knowledge about colleges, parents with less experience in the college process may weigh an additional piece of information much more heavily in their decision making. It could also be the case that some parents, particularly those with high-achieving children, believe that the probability of college completion at a given institution is not indicative of their own student's chances of graduating and may choose to ignore graduation rates as a result.

To explore these questions, we looked at the treatment effect among subsets of respondents, grouping parents by individual-level characteristics such as education, income, and their self-reported level of information about the college application process. In each case, we grouped respondents according to a given characteristic and then estimated separate treatment effects for each subgroup of parents. The results presented here control for the geographic region in which the respondent resides.
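To illustrate the structure of this subgroup analysis (a sketch, not the authors' code), the snippet below fits a linear probability model in which treatment assignment is interacted with a parental-education indicator, with region included as a control; the data are simulated solely to show the form of the estimation.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "treated": rng.integers(0, 2, n),             # random assignment to treatment
        "no_college": rng.integers(0, 2, n),          # parent has no college education
        "region": rng.choice(["north", "south"], n),  # substate region of residence
    })
    # Simulated outcome: baseline 46% choose College A, with a larger
    # treatment effect among parents who have no college education.
    p = 0.46 + 0.08 * df["treated"] + 0.15 * df["treated"] * df["no_college"]
    df["chose_A"] = rng.binomial(1, p)

    # Linear probability model: treatment effect by parental education, region controls
    model = smf.ols("chose_A ~ treated * no_college + C(region)", data=df).fit()
    print(model.params)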

Information effects vary across respondents of different backgrounds. In general, respondents with less education, those from lower income brackets, and those who feel uninformed about the college application process exhibited strong information effects, while higher-SES parents and those who reported high levels of college information did not experience a significant shift in preferences. The results also hint at an interesting pattern among respondents of different income and college-information levels that merits further investigation. Highly informed and high-SES parents were already somewhat more likely than their less-advantaged peers to choose College A even in the absence of completion information (though these control group differences are not quite statistically significant, they are evident). Less-informed and lower-income parents in the control group were less inclined to choose College A in the absence of graduation-rate information, but they exhibited strong information effects. As a result of these two patterns, the rate at which parents at different ends of the income and college-information spectrums chose higher-performing College A seemed to converge in the treatment group. Though the current data limit our ability to pinpoint these patterns, they suggest that providing graduation-rate information could lead low-income and low-information parents to express college preferences that look more like those of their high-income and high-information peers.

Consider figure 2, which shows that treatment effects were large and significant among less-educated parents. We grouped respondents according to whether they had a bachelor's degree or above, some college, a two-year degree, or no college education at all. Parents with no college education who were assigned to the treatment group were 1.5 times as likely as their peers in the control group to choose the college with the higher graduation rate. The effect was similar for those with a two-year degree. In contrast, for those reporting "some college" or a bachelor's degree and above, the outcomes were not significantly different across the treatment and control groups. The treatment effects among respondents with no college education or a two-year degree were so large that their preference for College A surpassed that of their more educated peers.

[Figure 2]

We find sizable and significant information effects for low-income and low-information respondents, captured in figures 3 and 4. We divided respondents into income quartiles, ranging from those making less than $40,000 per year to those making more than $100,000 per year. Parents from the lowest income categories responded significantly to graduation-rate information, and their preference for College A increased by about 24 percentage points in the treatment group. Respondents in the top two income groups responded to the graduation-rate information, but the effects were not statistically significant. As shown in figure 4, parents who reported that they were "not at all informed" exhibited a large shift in preferences between the treatment and control group, while the most informed parents experienced a smaller increase that was not significant. In the case of both income and information, providing graduation rates seems to have a "leveling effect," such that the preferences of the various subgroups look almost identical in the treatment group.

[Figure 3]

[Figure 4]

In general, our findings suggest that graduation-rate information affects the preferences of the least-advantaged parents the most, and that these sizable information effects often narrow differences in opinion that may have existed between parents with different backgrounds. Also remarkable is the stability of preferences among higher-SES parents, which suggests a number of hypotheses about how such parents think about their child's college options. Both of these findings should be investigated in future work.

Economic Benefits of Improving Consumer Information

The results of this parental-choice experiment confirm what should be true: when parents are provided with an additional piece of information that helps them distinguish one college from another, a significant proportion of them make different choices in favor of the higher-performing alternative.

Informing prospective students and their parents could have real economic implications. To see this, consider how this information might have shaped the incoming classes at the twenty public colleges that made up the pairs we presented to parents. There were around sixty thousand incoming undergraduates at these schools in the fall of 2008. When provided with graduation rates for each institution, about 15 percent of respondents switched their preference to the school with the higher graduation rate. This means that around nine thousand students would have enrolled in the higher-performing school based on this information. Given that the average gap in graduation rates among the paired colleges was 11.4 percentage points, we assume that the probability of college completion would be 11.4 percentage points higher for those nine thousand students. In other words, the shift in enrollment would have led to about one thousand additional college graduates (11.4 percent of the nine thousand switchers).
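For readers who want to trace the arithmetic, the following back-of-the-envelope sketch reproduces the calculation using the figures reported above.

    incoming_students = 60_000   # fall 2008 freshmen at the twenty paired public colleges
    switch_rate = 0.15           # share of parents who switched to the higher-performing school
    grad_rate_gap = 0.114        # average within-pair gap in six-year graduation rates

    switchers = incoming_students * switch_rate     # about 9,000 students
    extra_graduates = switchers * grad_rate_gap     # about 1,000 additional graduates

    print(f"Switchers: {switchers:,.0f}; additional graduates: {extra_graduates:,.0f}")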

According to the Census Bureau's Current Population Survey, college graduates earn about $45,600 as a starting salary, while people who report "some college" make just $31,400.[12] In the first year alone, these additional graduates would have earned more than $14 million more than they would have had they attended the lower-performing school. Using standard economic models, over a forty-year work life, one thousand additional college graduates would earn over $350 million in additional wages (calculated as what economists call the net present value). We also estimate that, given current federal income-tax rates, these graduates would have contributed an additional $40 million in federal income taxes.
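The lifetime-earnings figure can be approximated with a simple annuity-style present-value calculation. The 3 percent real discount rate and the assumption of a constant wage premium below are our illustrative choices rather than parameters reported in this Outlook, so the sketch lands near, but not exactly on, the $350 million estimate, which may also reflect wage growth over the career.

    graduates = 1_000
    wage_premium = 45_600 - 31_400   # $14,200 annual gap: bachelor's degree vs. "some college"
    years = 40                       # assumed work life
    discount_rate = 0.03             # illustrative real discount rate (our assumption)

    first_year = graduates * wage_premium   # roughly $14.2 million in year one
    npv = sum(graduates * wage_premium / (1 + discount_rate) ** t
              for t in range(1, years + 1))

    print(f"First-year gain: ${first_year / 1e6:.1f} million")
    print(f"Net present value over {years} years: ${npv / 1e6:.0f} million")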

Though this extrapolation assumes away a lot of complexity about student choice and persistence, these estimated economic benefits are a lower bound. For one thing, the sixty thousand students at our twenty paired colleges are a small fraction of the more than 2 million freshmen who entered college in 2009, so applying the same information effect nationwide would touch far more students. Moreover, on average the colleges we paired together had graduation rates about 11 percentage points apart, but our earlier work reveals that some colleges that admit similar students are separated by 30 or 40 percentage points. Even if the information effect is constant, the number of additional graduates would grow as the gap between graduation rates widens.

Improving Policy to Increase College Completion

Our findings are in line with existing research on choice behavior from psychology, behavioral economics, and public policy. They are also consistent with evidence that providing prospective college students with additional information can influence which college attributes they find important.[13] While the experiment is just a snapshot of a much more complicated real-world process, we believe our results constitute a concrete example of how additional information, provided in an accessible and comparable format, can influence preferences.

The results should therefore be heartening to those policymakers and advocates calling for better consumer information in higher education. To that end, we believe our findings have implications for the movement to make information available to parents and students in a way that can help them make better-informed decisions.

Comparability Is Key. As the best marketing and advertising executives know, consumers will seize on information that enables them to distinguish one product from another, provided that information is easily accessible and facilitates comparisons. Consumer Reports has this down to a science; prospective buyers need look at only one or two tables to compare various products on a whole host of indicators.

One concrete policy change that the Department of Education could make immediately is to require all colleges participating in federal student aid programs to report their graduation and retention rates clearly on all admissions and financial aid correspondence with students. Schools that receive federal student aid are already required to report this information to NCES and to make it publicly available, so adding this requirement would not necessitate any additional data collection. Students and parents could then line up their acceptance letters next to one another and compare institutions on comparable dimensions of student success. Such a "nudge" could pay dividends in the quest to match more students to higher-performing colleges at a low cost to the federal government.[14]

Equipping the Intermediaries. Prospective students and their parents rely on important intermediaries in their college search--guidance counselors and teachers are particularly important. Counselors provide students and their parents with a sense of what colleges and universities are within their reach both academically and financially. As such, it is critical that counselors pass on basic information about college quality and costs to prospective students and parents so that they can make informed comparisons across schools. Counselors should also take note of how popular colleges in their state compare to one another in terms of retention and graduation rates, and then help guide students to those colleges with a track record of success.

High school teachers are also important gatekeepers because they write recommendations, help students with college essays, and mentor students more generally. It would be helpful if these teachers also had a basic sense of which colleges should be highlighted as high-quality choices and which should be avoided. Counselors could help prepare teachers for this role, and teachers could in turn alert counselors when students have particular colleges on their list of choices. Educators and counselors must recognize that not all colleges are created equal when it comes to getting students across the finish line, convey that information to parents and students, and help them avoid those colleges that are not making the grade.

Closing the Information Gaps between Consumers. Our experiment shows that providing easily accessible and understandable information had the most significant effect on low-SES and low-information parents and that it made their choices largely indistinguishable from those made by consumers who are likely to be savvier.

The responsiveness of low-SES and low-information parents gets to the heart of discussions about improving college completion rates and market-based accountability. The institutions that need incentives to improve are not, by and large, the elite public and private institutions that attract the best, most advantaged students. Students from middle- and low-income families are more likely to attend the broad swath of less-selective colleges, many of which fail to graduate a majority of their students or otherwise lag in performance. These are the parents who would benefit the most from receiving clearer and more reliable information on student outcomes before making application and enrollment decisions. Such informed choices would then increase consumer pressure on low-performing schools to improve.

Be Careful What You Wish For: The Case for a Broad Battery of Quality Indicators. We conclude with a cautionary note. The good news is that parents responded to our experimental stimuli, and they did so in predictable ways. That could easily become the bad news if policymakers do not continue to push for a broad range of measures of student outcomes and institutional quality. Parental responsiveness to new information, particularly the pronounced effects among less-informed consumers, immediately raises questions about what information we should provide. To avoid perverse institutional incentives, potentially poor matches between institutions and students, and a short-sighted completion-rate "horse race," policymakers, advocates, and foundations should take pains to create "balanced scorecards" that measure a broad range of student outcomes.[15] Such an effort could collect and publicize rigorous and previously unavailable information on measures of student learning and satisfaction, labor-market success, debt-to-income ratios, return on investment, and any number of other concerns. Parents and prospective students could then weight each attribute according to their own set of priorities, helping ensure that choices are neither uninformed nor driven by one or two indicators of quality or costs.

Andrew P. Kelly ([email protected]) is a research fellow at AEI. Mark Schneider ([email protected]) is a visiting scholar at AEI and a vice president at American Institutes for Research.


Notes

1. Andrew P. Kelly and Mark Schneider, Filling in the Blanks: How Information Can Affect Choice in Higher Education (Washington, DC: AEI, January 2011).

2. US Department of Education, National Center for Education Statistics (NCES), Digest of Education Statistics: 2009 (Washington, DC, 2009), table 222, http://nces.ed.gov/programs/digest/d09/tables/dt09_222.asp (accessed December 16, 2010).

3. Bridget Terry Long's look at the National Education Longitudinal Study found that high school graduates from the class of 1992 were 73 percent less likely to select a four-year institution that was one hundred miles farther away, all else equal. See Bridget Terry Long, How Have College Decisions Changed Over Time? An Application of the Conditional Logistic Choice Model (Cambridge, MA: Harvard Graduate School of Education, January 2003), 15, http://gseacademic.harvard.edu/~longbr/DecisionsOverTime.pdf (accessed December 16, 2010). The Princeton Review's College Hopes and Worries survey shows that a majority of parents would prefer their students to be within 250 miles. See Princeton Review, "Princeton Review 2009 'College Hopes and Worries Survey' Findings," 3, www.princetonreview.com/uploadedFiles/Test_Preparation/Hopes_and_Worries/colleg_hopes_worries_details.pdf (accessed December 16, 2010).

4. US Department of Education, NCES, Digest of Education Statistics: 2009, table 190, http://nces.ed.gov/programs/digest/d09/tables/dt09_190.asp?referrer=list (accessed December 16, 2010).

5. Frederick M. Hess, Mark Schneider, Kevin Carey, and Andrew P. Kelly, Diplomas and Dropouts: Which Colleges Actually Graduate Their Students (and Which Don't) (Washington, DC: AEI, 2009), 7, www.aei.org/paper/100019.

6. Our survey was administered by YouGov/Polimetrix, an online polling firm that maintains a nationally representative panel of more than 1.5 million Americans. The firm uses an innovative sample-matching technique to construct representative samples from that overall panel.

7. In the northern region of Illinois, the comparison schools were Northern Illinois University in De Kalb and Eastern Illinois in Charleston. The schools are less than two hundred miles apart, but Charleston falls just outside the regional boundary that we drew between the northern and southern regions. The other Chicago-area schools were not suitable matches for Northern Illinois because of differences in selectivity or large demographic differences.

8. Frederick M. Hess, Mark Schneider, Kevin Carey, and Andrew P. Kelly, Diplomas and Dropouts, 7.

9. For information on YouGov/Polimetrix's methods, please visit their website at http://corp.yougov.com/scientific-research.

10. With the exception of the admissions selectivity rating, all the data on school characteristics were drawn from NCES's Integrated Postsecondary Education Data System (IPEDS), and we used the most recent data (from 2007) available at the time the survey was constructed. The following variables were used to construct the information sets (the full names of the variables are in parentheses):

Campus location: provided for all institutions by IPEDS. (IPEDS 2007)

Degree of urbanization: based on NCES's territory classification, which features four main types: rural, town, suburban, and city. (IPEDS 2008, Degree of Urbanization)

Admissions selectivity: based on a six-point scale of selectivity, ranging from "noncompetitive" to "most competitive" as classified in Barron's Profiles of American Colleges, 28th ed. (Hauppauge, NY: Barron's Educational Series, July 2008).

Undergraduate enrollment: the total number of full-time undergraduate students enrolled as of fall 2007. (IPEDS 2007, Grand Total, Full-Time Students Undergraduate Total)

Student demographics: the percentages of full-time undergraduate students who self-identify as white, black, Hispanic, and Asian in 2007. (IPEDS 2007, White Non-Hispanic Total, Black Non-Hispanic Total, Hispanic Total, Asian or Pacific Islander Total)

Six-year graduation rate: the percentage of first-time, full-time, bachelor's degree-seeking students who completed their bachelor's degree or equivalent within six years of enrollment. (IPEDS 2007, Graduation Rate, Bachelor's Degree within Six Years Total)

Cost of attendance: the combined costs of published in-state tuition and fees, books and supplies, and room and board for students living on campus for full-time, first-time undergraduates. (IPEDS 2006-2007, Published In-State Tuition and Fees 2006-2007 + Books and Supplies 2006-2007 + On-Campus Room and Board 2006-2007).

11. We chose the word "apply" to avoid the possibility that "attend" or "enroll" would eliminate parents whose child was unlikely to be admitted to the schools in question.

12. US Census Bureau, Current Population Survey, Annual Social and Economic Supplement 2009 (Washington, DC, 2010), table PINC-03, www.census.gov/hhes/www/cpstables/032010/perinc/new03_037.htm (accessed February 1, 2011).

13. According to the College Decision Impact Survey by Fastweb, when the Department of Education started posting graduation-rate information as part of "FAFSA on the Web," prospective college students began to rank graduation rates more highly in their list of selection criteria than they had before. See Mark Kantrowitz, "Summary and Analysis of Gainful Employment NPRM," FinAid.org, August 15, 2010, www.finaid.org/educators/20100815gainfulemploymentanalysis.pdf (accessed December 16, 2010).

14. See Richard H. Thaler and Cass R. Sunstein, Nudge: Improving Decisions about Health, Wealth, and Happiness (New Haven, CT: Yale University Press, 2008).

15. On the case for balanced scorecards in K-12 education, see Frederick M. Hess and Jon Fullerton, "The Numbers We Need: How the Right Metrics Could Improve K-12 Education," AEI Education Outlook (February 2010), www.aei.org/outlook/100940.
