Final Project: Digital Strategy for Teacher Preparation Organization

Note: This post is an assignment for Nicco Mele’s Course – Media, Politics and Power in the Digital Age – it summarizes a proposal for my final project.

We know that the most important school-based factor for student success is the effectiveness of their teacher. And yet, the quality of the training we give to teachers before they enter the classroom is often mediocre, with many teachers saying that the preparation programs they attended did not sufficiently equip them to manage a classroom and generate learning in their students. Recently, the question of how we prepare teachers has gotten more of a spotlight. States have been moving to raise the bar on teacher preparation with new licensure and program accountability rules, spurred to action by a report issued by The Council of Chief State School Officers in 2012. The federal government has also jumped on the bandwagon, saying it will issue new teacher preparation regulations soon. Not to be left out, programs are banding together to try to rethink how we train teachers, and some are establishing service lines to try to spread their innovations to other programs. And Elizabeth Green’s recent book, Building a Better Teacher, aims to raise the issue into the public consciousness, to help us think about what teachers should know and be able to do and how we can get them there.

Yet in spite of this buzz, the actual work of improving how we train teachers, particularly within institutions of higher education, is difficult. We’ve tried it before – Francesca Forzani’s dissertation describes The Holmes Group, which tried and failed to reform teacher preparation in the 1980s and 1990s. What we’ve learned from the past failures is that:

  • there needs to be a shared infrastructure and language around what we expect teachers should be able to do, ways to measure progress towards those goals, and proven strategies to teach these skills;
  • we must build capacity within institutions to bring about these changes;
  • and this work must be incentivized by appropriate government funding and regulation.

This summer, I helped get a new organization off the ground that aims to address these issues. The organization – called D4I – will bring together deans from a variety of schools of education to create shared competency maps and assessments, build capacity for change within and across the institutions, and speak with a united voice on policy issues related to teacher preparation.

D4I is just getting off the ground and is still figuring out how to best carry out its work. Given what we know about the power of technology to lower the costs of organizing, collaborating, and spreading information, it seems obvious that D4I should be thinking strategically about how to leverage technology to do its work. D4I’s digital strategy will need to take into account its multi-faceted audience (deans and faculty members at schools of education, policymakers, funders, etc.) and its different objectives (advocacy, infrastructure-building, capacity-building). These various factors will influence the goals of a digital strategy and thus the technologies and tactics that should be used. My aim will be to craft a digital strategy for D4I that is responsive to these factors and that the organization can use as its roadmap as it gets up and running.


The Grand Bargain

The recent controversy at Harvard College about using secret cameras placed in classrooms to document student attendance has raised concerns about surveillance on campuses.  Questions are swirling about whether it was justified in pursuit of better teaching, who should have been told, whether consent was necessary, whether the type of data collected matters, and how it should have been collected, stored, and analyzed.  This controversy is merely indicative of a broader struggle between individuals and institutions over privacy.

Rebecca MacKinnon, in Consent of the Networked: The Worldwide Struggle for Internet Freedom, argues that we implicitly give governments the power to collect our data in exchange for a service – security.  However, we expect that there are limits on the scope of that bargain. What we’re now facing is that the bargain’s boundaries are being tested in new ways due to the reach of technology, the erosion of governmental accountability, and the emergence of private companies as actors in surveillance.

  • In “The Ecuadorian Library,” Bruce Sterling notes that surveillance and opposition to surveillance have existed for a long time. The difference today is that because we now live so much of our lives online, the activities that can be efficiently monitored by the state have increased dramatically.  In No Place to Hide: Edward Snowden, the NSA, and the US Surveillance State, Glenn Greenwald uses the classified documents obtained by Edward Snowden to show that the scope of surveillance in the US is massive and growing, with the NSA operating under the mantra of “collect it all.”  There are fewer and fewer spaces where we can be outside the reach of monitoring.
  • MacKinnon writes that crucial to our implicit bargain is that the state be transparent and held accountable for how it uses data.  But that accountability is eroding in the US, with new laws that grant immunity to companies participating in surveillance and allow warrantless monitoring, and with intelligence agencies claiming license to lie about their activities to the very bodies that are supposed to hold them accountable.
  • Emily Parker observes, in Now I Know Who My Comrades Are: Voices from the Internet Underground, that companies like Microsoft, Facebook, and Google have essentially become part of the public policy apparatus on issues of privacy.  These companies have choices about how to use the data they collect, and many are choosing to cooperate with the government.  Yet they are even less accountable than government is for their role in these policy decisions.

The grand bargain on surveillance, then, is up for renegotiation. But who is at the negotiating table? Most of us can’t be bothered. Instead, it’s Julian Assange, Edward Snowden, Bradley Manning, people that Jaron Lanier describes as “vigilantes,” people who decide to take matters into their own hands.  The problem is that these vigilantes are not impartial and can cause more harm than good. Raffi Khatchadourian’s portrait of Assange in The New Yorker shows that these vigilantes are inherently anti-institutional, believing that only by giving data to individuals can the natural corruption of institutions be stopped, and this is necessary regardless of the destruction caused.

So what’s the answer? Lanier and Khatchadourian call for more accountability for the vigilantes; MacKinnon and Parker call for more accountability for government and corporations, and they place the responsibility for that accountability on all of us. But what these writers miss is that accountability is only necessary to the extent that you don’t trust institutions.  If you have complete trust, there is no need for accountability.  This is an issue of institutional trust, then, rather than one of accountability, and the question changes from “How can we hold institutions accountable?” to “Why don’t we trust our institutions?”

And that brings us back to the situation at Harvard.  Harvard and other educational institutions have long had an implicit bargain with their students and faculty that data would be used to improve the service they are offering.  However, Harvard’s recent use of secret cameras and other initiatives to collect and use student data, such as inBloom in the K-12 setting, have put the bargain up for renegotiation.  What these debates boil down to is trust in school systems – do we trust them to use the data to do the right thing for students? It’s worth considering why, for many people, the answer to that question is no.


Change We Can Believe In?

The 2012 Obama campaign has been heralded as a turning point for leveraging technology effectively in political races.  With the benefit of hindsight, those analyzing the digital strategy of the campaign boil the success down to three things: personalization, experimentation, and empowerment. In the MIT Technology Review, Sasha Issenberg notes that the campaign’s big data strategy was, at its core, a transition from viewing voters as an aggregate to thinking of them as individuals about whom the campaign could gather more and more granular data on preferences and responsiveness.  Having the ability to collect, manage, and analyze individual micro data also allowed the campaign to follow the Silicon Valley trend towards A/B testing. Issenberg describes how the campaign embraced this rapid and frequent experimentation to learn how different people responded to different outreach techniques and messages. Zack Exley diagnoses how the campaign used all this data and new management tools to empower its volunteers. By passing all this information down the campaign food chain, they created a massive, decentralized, yet coherent, field organization that the Romney campaign could not match.

However, while the strategy delivered the results the campaign hoped for in the short term (Obama’s reelection), I can’t help but wonder if it was a bit myopic, failing to give enough weight to the long-term implications of its choices for brand perception, the volunteer base, learnings about effective campaigns, and future supporters.

Brand perception: In a Harvard Business School case, “Obama versus Clinton: The YouTube Primary”, Deighton and Kornfeld discuss how a campaign can lose control over its volunteers and the message they are putting out (e.g., I don’t think the campaign was thrilled with the Obama Girl video). Even if it can control its message, a strategy based on driving receptive people towards specific actions (donating, attending an event, voting, etc.) may not be thinking about the impact on the overall brand perception of the candidate, the party, or the office. The decisions being made now about who controls the message and what the messages are about will impact public perception far beyond the end of the campaign or the length of the term.

Volunteer base: In another HBS case about the 2012 campaign, Piskorski and Winig discuss the risk involved in giving front-line staff the kind of autonomy that the Obama campaign did. If the promise of empowerment turns out to be a myth, you risk forever alienating the people Exley calls the “new organizers,” losing a whole generation of supporters. Overpromising could leave future campaigns facing an increasingly disillusioned population.

Learnings about campaigns: Brian Christian, in “The A/B Test: Inside the Technology That’s Changing the Rules of Business,” describes some of the hazards of a culture built on rapid, frequent, incremental experimentation: “No choices are hard, no introspection is necessary.” There is no premium on understanding why something works, just that it does. In the long run, this may be inefficient for future campaigns; without the reflection that can help uncover the underlying drivers of results, the next campaign could repeat mistakes, head down an avoidable rabbit hole, or miss out on new opportunities to extend the theory to other areas.
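For readers unfamiliar with the mechanics, the arithmetic behind the kind of A/B test Christian describes is simple. Here is a minimal sketch in Python of a two-proportion z-test; the function name and conversion counts are my own illustration, not figures from the campaign or from Christian’s article:

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: return (lift, z), the difference in
    conversion rates and the z statistic under the pooled null."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return p_b - p_a, (p_b - p_a) / se

# Made-up example: subject line B converted 4.6% of recipients vs. 4.0% for A.
lift, z = ab_test_z(400, 10_000, 460, 10_000)
```

In this toy example z lands just above the conventional 1.96 significance threshold, so variant B “wins” – and, exactly as Christian warns, the test says nothing at all about why.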

Future supporters: The personalization strategy, which focuses on allocating resources to the outreach that will drive behavior most efficiently, may also leave some future voters on the table. While an individual may not appear to be immediately responsive to outreach, the effect of outreach may build over time. By targeting only voters seen as responsive, campaigns may neglect to lay the groundwork for individuals who are less responsive in the short term but may convert to engaged supporters in the future. By optimizing so exclusively around short-term actions, parties risk losing a bloc that could have been patiently built over time and yielded fruit in future elections. This last point seems most important to me, and yet it hasn’t been addressed in much of the retrospective analysis of the campaign.

What does this have to do with education? In education, too, there are lots of fancy new tools, but to what end? The answer may be at the core of the Obama strategy: personalize, experiment, and empower. But in order for it to work, we will need to consider, as the Obama campaign did, how we enable that strategy by investing in infrastructure, defining new roles, training in new ways, and facilitating a cultural shift. And, impatient as we are for quick results, we should keep an eye on what our decisions today imply for teachers, students, and the definition of schooling in the future.


Natural Selection in Newspapers and Universities

In 2009, Clay Shirky wrote a blog post likening the disruption of the newspaper industry to the revolution caused by the printing press. The internet upended how news could be produced, distributed, and monetized, wreaking havoc on traditional news institutions, just as the printing press had contributed to 400 years of political and religious chaos in Europe. Shirky wrote that in these times of revolution – when “the old stuff gets broken faster than the new stuff is put in place” – experiments in new institutional models are crucial. His mantra, “Nothing will work, but everything might,” highlights his perspective that many small experiments could produce a few “turning points” that could provide the new model for journalism.

Two years later, Dean Starkman in the Columbia Journalism Review warns Shirky and other journalism intellectuals that their anti-institutionalist predictions could actually be hastening the demise of the newspaper industry, bringing with it serious negative externalities. Starkman points out that serious investigative journalism requires clout, time, and money, which are what traditional institutions can offer. By encouraging experimentation that commoditizes reporting and values the 24-hour news cycle, Shirky and his “Future of News” compatriots are chipping away at the very institutions that are able to hold power accountable, without providing any guidance as to how they can evolve to preserve this critical function.

Shirky seems to take this criticism to heart in his 2012 report, Post-Industrial Journalism, written with Emily Bell and C.W. Anderson. In it, he acknowledges the importance of public interest journalism and begins to lay out a vision for the future that highlights the importance of both new models and traditional institutions, which can uniquely offer leverage, symbolic capital, continuity, and slack. The report offers recommendations to journalists, outlining a niche that builds on what they can do better than crowds of amateurs and machines. It also urges existing institutions to reexamine the underlying processes and technologies that perpetuate old models and make it difficult to change.

Shirky, Bell, and Anderson emphasize that their recommendations are consciously focused on institutions other than the New York Times, which has drawn a lot of attention but is “a uniquely poor proxy for the general state of American journalism.” The New York Times, they argue, has become a cultural institution of global significance, which provides it more flexibility in the choices it can make to adapt to a new reality. We have seen the resources that the New York Times can bring to bear: how many other newspapers can commission a 6-month internal strategic task force to map out a path towards a digital future?

I would agree: the New York Times is unique. More than any other news institution, it provides leverage, symbolic capital, continuity, and slack. If, as Shirky & Co. posit, traditional institutions are valuable to the extent they provide these four functions, then it is precisely because the NYT has these qualities in greater concentration that we should focus on it rather than on the long tail of traditional newspapers. It may be better to spend time on the NYT-like anomalies and let the new models replace the long tail.

We are seeing the same pattern as the internet disrupts higher education. Early on there were bold predictions about the demise of traditional universities, causing widespread anxiety. However, the models put forth to replace them (MOOCs) haven’t quite lived up to expectations. What we’re beginning to see in higher education mirrors newspapers: stratification of traditional institutions, as the top tier leverages its resources and flexibility to evolve, while new entrants make inroads into the space historically held by lower-tier institutions. Harvard has devoted resources to HarvardX, giving faculty the time, space, and resources to reimagine their roles. Just as the New York Times did with its Innovation report, MIT and Stanford have each recently stepped back to think about the futures of their institutions and how their underlying processes may need to change given the evolving external environment. However, it’s unlikely that many institutions beyond those in the top-tier will have the luxury to do this, making them more vulnerable to the disruption of new models. And so just as we identified the four qualities of traditional news institutions that were worth preserving (leverage, symbolic capital, continuity, and slack), we need to ask ourselves what traditional universities offer that new online models cannot provide. If those qualities exist primarily in top-tier institutions, we should focus on helping the Stanfords and MITs adapt, while encouraging experimentation of new models. But if those qualities are equally present in lower-tier institutions, we must do more to help them adapt.


Teacher Education…Can I Get a Definition, Please?

Given how critical teacher quality is to student success and how critical teacher preparation is to the effectiveness of teachers, it is important for the public to have a good grasp on what it means to prepare teachers.

In The Wikipedia Revolution: How a Bunch of Nobodies Created the World’s Greatest Encyclopedia, Andrew Lih notes that “no other reference site comes close in terms of traffic or popularity” to Wikipedia. Google virtually any topic, and the Wikipedia entry will come up near the top of the results page. With that much visibility, a Wikipedia entry has the potential to shape the views of the public around a specific topic. The Wikipedia article on “teacher education” has been viewed over 20,000 times in the last 90 days. But how good is Wikipedia at providing the public with information regarding teacher preparation? I recently evaluated the article along three dimensions: comprehensiveness, credibility, and usability.
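(As an aside, view counts like this are now easy to check programmatically. A sketch of how one might build a request against the public Wikimedia Pageviews REST API – note this API only covers mid-2015 onward, so the dates below are illustrative placeholders, not the 90-day window cited above:

```python
# Sketch: constructing a per-article daily pageviews request for the
# public Wikimedia Pageviews REST API. Dates are placeholders.
BASE = "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article"

def pageviews_url(article, start, end,
                  project="en.wikipedia", access="all-access", agent="all-agents"):
    """Build the per-article daily pageviews endpoint URL."""
    title = article.replace(" ", "_")  # spaces become underscores in the path
    return f"{BASE}/{project}/{access}/{agent}/{title}/daily/{start}/{end}"

url = pageviews_url("Teacher education", "20200101", "20200331")
```

Fetching that URL with any HTTP client returns JSON with one daily view count per day, which can be summed over whatever window you care about.)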

Comprehensiveness = C

The “teacher education” article hits on the basics of teacher preparation, including the general scope and sequence of a teacher education curriculum and program and the types of institutions and programs that train teachers. However, the article needs to do much more to fully describe teacher preparation. First, the article fails to discuss the history and evolution of teacher education, which is necessary context when trying to understand why teacher preparation looks the way it does now. Second, the attention to teacher preparation policy is insufficient. In the United States, program approval, accreditation, and licensure policies at the state and federal level shape teacher education and there has been increasing attention to those policies. Third, there is little discussion about how teacher preparation differs across countries. Fourth, there have been a number of reform efforts recently in the domain of teacher education that are worth mentioning in order to give readers a more up-to-date view of the landscape and the different actors in the space. Finally, the article would benefit from specific examples of well-known and/or effective teacher preparation programs.

Credibility = B-

In order to ensure the credibility of the information, the article needs both to convey neutrality and to draw on reputable sources. The article does a good job of highlighting a few of the ongoing debates in teacher education – such as how to assess teacher quality, what knowledge and skills teachers should get during their training, and what actually constitutes teacher education – and of keeping a relatively neutral stance on each of them. However, the article could benefit from fleshing out the arguments in order to provide a more complete picture of the differing opinions. The article draws on academic papers and well-respected sources, such as publications by the American Educational Research Association. However, given the breadth and depth of research on teacher education, the source list seems relatively slim, and the article does not cite any sources published after 2009. To raise its credibility, the article needs to incorporate more up-to-date research.

Usability = A-

One of the most important features of a Wikipedia article is whether it is accessible to the broad population; articles will lose readers if they are written poorly or the formatting is a distraction. The “teacher education” article, thankfully, does not suffer from these issues. It is well written: clear and succinct, without lapsing into technical education jargon. The article adheres to the style conventions of Wikipedia so that the reader can navigate it with familiarity – the introduction at the top is concise, the different sections are clearly identified with appropriate headers, and there are clear citations and suggested links to other relevant topics. It could perhaps be useful to include graphical representations of the different components of teacher education programs (coursework and field experiences) or of the different phases of teacher education (from pre-service to induction to professional development), but in general, the average reader should find the article easy to read and absorb.

In education, we often complain about the public not engaging in an informed conversation around the issues. However, if we want people to engage, we should think about easy ways that we can ensure that they have comprehensive, credible, and usable information. If we want the general public to start thinking about how we can better train and prepare teachers, we should meet them where they are: Wikipedia.


How to Be a Portal 101

In Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives, Christakis and Fowler note that our online social networks are extensions of our offline social networks; the way we behave in these new spaces reflect fundamental human tendencies. However, the technology undergirding these online networks has vastly increased the type and frequency of interactions available to us, as well as the potential scale of our networks. As Howard Rheingold notes, in Net Smart: How to Thrive Online, we can communicate and share information with many more people, enhance connections with those closest to us, and build exponentially more “weak ties” – the random, distant links that can help us find jobs and learn new things. And Rheingold argues that as networks grow, the value derived from them shifts from the efficient delivery of services to eventually the facilitation of group affiliations.
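Rheingold’s point about value shifting from service delivery toward group affiliation echoes what network theorists describe as the progression from Sarnoff’s law through Metcalfe’s law to Reed’s law. A toy comparison (my own illustration, not code from either book; the formulas are rough scaling arguments, not precise valuations):

```python
# Rough scaling of network "value" under three classic models:
# broadcast ~ n (Sarnoff), pairwise links ~ n(n-1)/2 (Metcalfe),
# possible nontrivial subgroups ~ 2^n - n - 1 (Reed).
def network_value(n):
    return {
        "broadcast": n,                # one-to-many delivery of a service
        "pairwise": n * (n - 1) // 2,  # distinct person-to-person links
        "groups": 2**n - n - 1,        # subgroups of size >= 2
    }

values = network_value(20)
```

Even at 20 members the group-forming term dwarfs the other two, which is the intuition behind Rheingold’s claim that mature networks derive their value from facilitating group affiliations rather than from delivering services.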

Yet Rheingold also argues that the universe of benefits made possible by the explosion of our online networks is not guaranteed. He writes, “What you know, as always, can make the critical difference between being exploited or alienated by your use of social media, and enriching your life and community by your use of the same media.” To thrive in the new world of Facebook and Twitter, Rheingold says people will need to understand what it means to be a “portal”: which ties are most valuable, how to cultivate them, and how to navigate in spaces where the boundaries are fuzzy and what you put online is increasingly permanent. If we aren’t thoughtful about artfully crafting our social networks, we could end up in what Eli Pariser describes as a “filter bubble,” insulated by algorithms from perspectives that are uncomfortable, yet important. If we can’t successfully shift to a new paradigm of privacy (or lack thereof), we may need to do what the Europeans have done, and beg the court system (and then Google) to allow us to hit a restart button on our online profiles.

I agree wholeheartedly with Rheingold about the staggering opportunities made available by online social networks. Applied to education, we have only just begun to scratch the surface of what online social learning means. MOOCs, or Massive Open Online Courses, are a hot topic in education right now (if tech journalists are to be believed, this month has seen both the demise and the resurgence of MOOCs). But the MOOCs getting the most attention – Udacity, Coursera, edX – are xMOOCs, a specific type of MOOC that enables efficient content delivery to many learners at once through an online platform. xMOOCs sit squarely in Rheingold’s conception of a network in its earliest stage, where value is derived from the linear delivery of services. cMOOCs, on the other hand, are less well-known, but leverage what Rheingold views as the advantages of more advanced social networks – they allow for decentralized learning experiences where learners co-construct their knowledge through peers they are affiliated with online. (Here’s more on the distinction between xMOOCs and cMOOCs).

One reason that cMOOCs may not have taken off as quickly as xMOOCs is that participation in them is difficult. I am taking a class this semester (Massive: The Future of Learning at Scale, taught by Justin Reich) that aspires to give us insight into MOOCs, partially by an “immersion in the technologies of large scale learning.” We have spent multiple weeks learning, reflecting on, and practicing what it means to be a self-directed learner co-constructing knowledge with peers online. My takeaway: it’s hard to know how to contribute productively to a community and to parse out useful and relevant learning from a firehose of information!

Rheingold is right: we must be smart and thoughtful participants in our online social networks if we are to reap the potential benefits they offer. But to be the type of portal that Rheingold envisions requires a whole new set of skills and competencies. And that’s where my critique of Rheingold comes in. We talk about becoming a portal, but how do we get there? Some of us may figure this out on our own, as Rheingold seems to think. But many of us won’t. It takes practice and reflection to understand how to be a portal, and there are lasting consequences for people who get it wrong. We need to be systematically teaching kids how to be good portals. If we don’t, the kids who can figure it out on their own will zoom ahead, leaving the others behind.


Here Comes Everybody…Slowly

In Here Comes Everybody: The Power of Organizing Without Organizations, Clay Shirky makes the case that new social technology has fundamentally changed the way we communicate with each other, which in turn has had a profound impact on our ability to collaborate and take collective action. Shirky argues that as technologies such as email, Facebook, Twitter, and Flickr have lowered the cost of communication, they have democratized it so that any individual can both consume and create content. By removing the barriers to communication, these tools have also made it easier to organize, coordinate, and manage groups of people, which can then take on more and more complex tasks without formal institutional support from an organization. This has allowed loose affiliations of people who may never have met before to take on massive undertakings together, like creating a living encyclopedia or starting revolutions.

On these points, I believe Shirky is right. Social tools have dramatically lowered the cost to form groups and use those groups to do complex work. However, while Shirky pays lip service to the corresponding social changes that must occur to take full advantage of these technological advances, I think he underestimates the importance of the social and cultural context in which these tools are applied, which has a bearing on how quickly and successfully different fields will see transformation as a result.  Shirky acknowledges that social changes and changes in behavior are important (and in fact are the outcome ultimately desired), but that these social changes often lag technological changes. He points out that until the social changes catch up to the technology, there is likely to be chaos in the system as norms, rules, processes, behaviors, and attitudes are renegotiated. But the length and level of chaos, and the degree of change that occurs, is likely to be very dependent on the context of the specific field.

In more complex fields, with many interrelations and interdependencies, these tools are likely to cause behavioral and social shifts with many and far-reaching unintended consequences. Justin Reich has a great article about this particular point in the context of education; because of the complexity of the education system, new learning technology can introduce all sorts of unintended consequences that are extremely difficult to predict and could have ramifications far into the future.  These unintended consequences will likely prolong the chaos in the transition from the old institutions to the new equilibrium.  

Aside from complexity, it’s also important to consider how much output quality depends on technical expertise. In fields where that dependence is high, these new tools may have a more difficult time gaining traction, both because of an assumption and a reality that mass amateurization doesn’t produce an acceptable level of quality.  In education, online learning tools and courses have been heralded as a revolution. And it’s true that we can now access TED Talks, Khan Academy, Wikipedia, and all sorts of other content objects that can help us gain new knowledge.  However, we also know a lot, from a pedagogical perspective, about what it takes to ensure that learning happens and that it sticks…and it’s hard.  The majority of learning content developed by the mass of amateurs certainly doesn’t reflect that understanding of pedagogy.  Over time, as we come to realize that much of the user-generated content in education does not actually provide the quality of learning that we hope for, resistance to social and behavioral change may increase, making it less likely that the technological change will stick in a meaningful way.

As Audrey Watters notes, we have become obsessed with the myth of disruptive innovation (particularly in education), in which the old is destroyed and the new is adopted.  However, there are specific contextual factors in education that may impede, or at least dramatically slow, the adoption of the new. While Shirky has certainly made his case that social technologies are changing the way we communicate and collaborate, we should remember that the pace of true institutional change in education is likely to be slow.
