Update 4/25 (bumped due to changes): Thanks to Greg Ketcham and Robert Knipe, I have replaced the 2009 interim proposal document with the updated advisory team report. This changes the intro blurb, the description of the 9 interdependent components, and the list of contributions below.
I have been surprised at how little interest the Open SUNY announcement last week generated in educational media and blog discussions. Perhaps the MOOC portion of the story, which was prominent in several headlines, caused people to assume this was just another school trying to jump on the bandwagon. What is significant, however, is that one of the largest statewide systems in the country is taking a multi-pronged approach to reduce time-to-graduation and therefore lower student costs.
In brief, Open SUNY is part of the system’s agenda to expand access to public higher education by leveraging existing programs or experiments already in place at member campuses or at the system level, and it has strong ties to Open Educational Resources (OER) concepts. The concept for the strategic plan originated in 2009, eventually leading to the report Getting Down to Business: Interim Report of the Chancellor’s Online Education Advisory Team released in December 2012 [updated].
The Advisory Team recommends “Open SUNY” be officially adopted as the name of SUNY’s new online learning initiative. The term Open SUNY represents an opening up of the educational opportunities that SUNY can provide through the enhancement of existing—and development of new—online education resources, courses and degree programs.
Open SUNY has the clear potential to establish SUNY as the preeminent and most extensive online learning environment in the nation by providing affordable, high quality, convenient, innovative, and flexible online education opportunities for the citizens of the State of New York and beyond. As a collaborative online educational network, the Open SUNY Online Consortium (SUNY campuses and SUNY system offices) will draw on the Power of SUNY to connect students with faculty and peers from across the state and throughout the world, and link them to the best in research-based online teaching and learning environments, practices, and resources. Dedicated to providing access to open and online learning opportunities, Open SUNY will connect learner and community needs and will allow the State University of New York to bring this concept to scale like no other college, university, or system in the United States.
What is Open SUNY?
Open SUNY is a set of 9 interdependent components, as described in the advisory team report [updated]:
1. Open SUNY Online Consortium - Comprised of courses from SUNY campuses across the system taught by SUNY faculty, the Open SUNY Online Consortium will collectively offer the most extensive array of online courses and degree programs in the country. This unified approach to online education will provide learners with cost effective options to compete with the rising costs of higher education and enable students taking courses across multiple SUNY institutions to receive financial aid from their home institution.
2. Open SUNY Degree - The term Open SUNY degree refers to functional coordination of policies and practices that “systemness” will allow for, not the actual degree conferrals that are the role of the campuses. The Office of the Provost will seek out campuses to offer new, high needs, online degree programs that will not necessarily require the host campus to develop or provide all the necessary courses to meet credit requirements to confer a degree.
3. Open SUNY Complete - Open SUNY will lead a SUNY-wide project to support degree completion for students who seek to return to college after a significant absence (commonly referred to as “stopped out”). The Open SUNY Complete program will identify and support former students who wish to return to SUNY to earn and complete a degree. This will occur through use of market analyses and outreach to students who are now considered beyond the normal reach of the originating enrolling college, using a variety of cooperative strategies between SUNY institutions.
4. Open SUNY Resources - Open SUNY Resources will build on existing digital repositories, making vast amounts of high quality, credible material available to faculty and learners, while simultaneously staking ground as a world leader in creating new resources by leveraging the vast expertise available across SUNY disciplines.
5. Open SUNY PLA (Prior Learning Assessment) - Increasingly, people acquire and assimilate knowledge both internal and external to the academy. Recognition of the latter, when applied toward college level learning, provides greater access to higher education, decreased time to degree completion, increased retention and completion rates, and significantly lower costs to students. Open SUNY PLA will provide services to campuses that do not wish to establish their own prior learning assessment processes.
6. Open SUNY Workforce - A SUNY-wide strategy for the use of online learning in support of workforce development and adult/continuing education can strengthen SUNY’s role as an economic driver throughout NYS and provide access to SUNY higher education for potential employees, current employees, and employers statewide (and nationally, for those attracted to all that SUNY and New York have to offer).
7. Open SUNY International - Open SUNY International will provide a network for learning by linking faculty and students from around the world, demonstrating SUNY’s commitment to international education. In partnership with the Office of Global Affairs, Open SUNY International will provide new opportunities for SUNY students to engage in international and intercultural learning.
8. Open SUNY Research - Open SUNY Research will continue a long tradition of scholarship related to innovation, student access, and learning in open and online environments. Previous support from the Office of the Provost has fostered an active and ongoing research and development agenda with more than 150 conference papers, book chapters, peer-reviewed journal publications, monographs, and presentations directly related to SUNY Learning Network and online education initiatives. Open SUNY Research expands this work and will be supported by a combination of SUNY-wide innovation grants, external funding, formal initiatives, advisory group efforts, and campus-based research activities.
9. Open SUNY Learning Commons - The Open SUNY Learning Commons will be a set of technology applications and online environments to support all Open SUNY services and components. Facilitating communication across campuses, the Learning Commons will bring the user-friendliness of social media applications to the SUNY community. It will leverage advanced open source and commercially available online learning tools, while building communities of practice for students and faculty.
Open SUNY funding comes from $18.6 million allocated through NY2020 legislation, and operations are estimated to eventually cost $3.35 million per year.
The plan was announced during the SUNY Chancellor’s State of the University address on January 15, 2013. One of the goals of Open SUNY, according to the Chancellor, is to expand access to public higher education:
Launch of Open SUNY in 2014, including 10 online bachelor’s degree programs that meet high-need workforce demands, three of which will be piloted in the fall. Open SUNY will leverage online degree offerings at every SUNY campus, making them available to students system-wide using a common set of online tools, including a financial aid consortium so that credits and aid can be received by students across campuses. Chancellor Zimpher said Open SUNY enrollment will reach 100,000 students within three years, making it the largest online education presence of any public institution in the nation.
On March 19, 2013, the Board of Trustees endorsed the plan. One of the motivations for this move was to coordinate campus efforts and gain system-wide synergies, as described by Ry Rivard at Inside Higher Ed. One of the key targets for the online expansion will be non-traditional adult learners.
SUNY Chancellor Nancy Zimpher wants to consolidate online course offerings after nearly 20 years of institutional independence.
“I think the problems the country is trying to solve simply cannot be solved one institution at a time,” Zimpher said in a recent interview. [snip]
SUNY began its online efforts in 1994 at Empire State College. Now, there are 150 online degree programs scattered across all its campuses. SUNY’s extensive offerings are, as it has said in documents related to its new effort, “fragmented” – the source of “countless unexplored opportunities for collaboration, economies of scale and innovation.”
Zimpher ultimately wants to enroll 100,000 new online students in the next several years while also adding new degree programs to train New Yorkers for industries with job openings. To reduce costs to students, she is also trying to speed degree completion times in online degrees to three years.
The chancellor said the whole online effort will target adults.
“We have all these adults who have some education but not enough,” she said. “We’re really trying to grow a major enrollment in an underserved population.”
Ry Rivard’s article also highlights potential pushback from the faculty unions.
A spokesman for the union that represents SUNY academics and instructors said the union had not been consulted about the push.
“SUNY hasn’t brought us into the conversation, hasn’t consulted us,” said Don Feldstein, spokesman for United University Professions, which represents about 32,000 SUNY employees.
SUNY spokesman David Doyle said the system had consulted with faculty by appointing some of them to a task force and by talking to faculty through the “appropriate governance channels,” such as the faculty senate.
How Will We Know?
The part of innovation that I don’t see mentioned enough, at least in the proposal and press releases, is a structured method of determining what works and what doesn’t work. The proposal does mention the metrics that should improve if Open SUNY is successful, but these are all at the initiative level, and not at the individual innovation level [updated].
The impact of Open SUNY will be measured by its contributions to:
- Enhancing and supporting academic excellence of faculty and students;
- Reducing the time required for degree completion;
- Reducing the overall cost of obtaining a SUNY degree;
- Meeting workforce and societal needs;
- Increasing SUNY completion rates;
- Increasing the number of online learners;
- Enhancing the profile of SUNY as an innovative leader in teaching and learning;
- Continuing to reduce a collective carbon footprint; and
- Increasing student and faculty international engagement through online interaction.
Some of these are laudable goals (reducing time to degree and overall cost, increasing completion rates), but some are ill-defined (improved outcomes) and some are questionable (increasing the number of online learners as a goal rather than as a means to a goal, and enhancing the profile).
But a deeper problem is the lack of discussion about determining which innovations to diffuse and which to keep from diffusing. Perhaps there are plans for evaluating courses and programs, but I can find no details.
Focus on Spreading Innovations, not Creating Innovations
SUNY, of course, is not the first place to develop MOOCs, online courses, OER, open courseware or PLAs, so what is important about this announcement? I think the significance lies in SUNY’s scale and SUNY’s approach. SUNY appears to view the Open SUNY program as a method to spread educational innovations throughout one of the largest systems in the country, rather than as a new pilot program or experiment. SUNY has 468,000 students and plans to add 100,000 more. Rather than trying to create new innovations itself, the role of the system is to foster innovation and then take the best ideas and make them available to all.
Although it’s not getting enough attention, Open SUNY will have an outsized impact on the future of online education in the US. State-wide initiatives, whether driven by the systems or the state government, are becoming one of the biggest factors in how higher education is changing in the US. I suspect that other states will be watching SUNY and adopting this model in part or in whole.
Pay attention to Open SUNY – it will matter.
Further reading in chronological order:
- SUNY Strategic Plan, “The Power of SUNY”, 2010
- Associated Press, “SUNY seeks to establish a ‘cradle to career’ future for its graduates”, April 13, 2010
- Empire State College, “Open SUNY Final Proposal”, 2012
- CNY Central, “SUNY Chancellor reveals ambitious agenda”, Jan 15, 2013
- USA Today, “State University of New York pushing online classes”, Jan 15, 2013
- Education News, “Open SUNY Will Mark New York’s Push into Online Education”, Jan 22, 2013
- Open SUNY Press Release, “SUNY Board Outlines Implementation of Open SUNY”, March 19, 2013
- Buffalo Business First, “Online courses to be available across SUNY system”, March 20, 2013
- Chronicle of Higher Education, “SUNY Signals Major Push Toward MOOCs and Other New Educational Models”, March 20, 2013
- Online Colleges, “State University of New York Embraces Online Learning with Open SUNY Initiative”, March 22, 2013
- e-Literate, “SUNY and the Expansion of Prior Learning Assessments”, March 26, 2013
- Inside Higher Ed, “Economies of Online Scale”, March 27, 2013
Update 4/02: Fixed editing mistake to say “SUNY, of course, is not the first place to develop . . . “
Last week California SB520 – the bill aiming to create a pool of 50 high-demand, lower-division online courses for which the public systems would have to award credit – was amended based on ongoing discussions and negotiations. The fact that the bill has been amended is not surprising, as this is the intent of the legislative process.
The themes of the amendments are to:
- shift the approval of the pool of online courses from the California Open Education Resources Council (COERC) to the administration and faculty senates of the three systems (University of California, California State University, and California Community Colleges);
- tie the administration of the program to the California Virtual Campus;
- restrict each course to matriculated California public higher education and qualifying K-12 students;
- tie the provisions of the bill to funding in the Annual Budget Act; and
- remove any tie to American Council on Education recommendations.
Amended Bill Language
Below are some of the key changes to the bill, shown with markup (struck-through text for deletions; the replacement language appears inline alongside).

This bill would establish the California Online Student Access Platform under the administration of the ~~California Open Education Resources Council~~ President of the University of California, the Chancellor of the California State University, and the Chancellor of the California Community Colleges, jointly, with the academic senates of the respective segments. The bill would require the platform, among other things, to provide an efficient statewide mechanism for online course providers to offer transferable courses for credit and to create a pool of these online courses. The bill would require the ~~council, among other things,~~ President of the University of California, the Chancellor of the California State University, and the Chancellor of the California Community Colleges, jointly, with the academic senates of the respective segments, to develop a list of the 50 most impacted lower division courses, as defined, at the University of California, the California State University, and the California Community Colleges ~~that are deemed necessary for program completion or fulfilling transfer requirements, or deemed satisfactory for meeting general education requirements~~ in areas defined as high-demand transferable lower division courses under the Intersegmental General Education Transfer Curriculum and, for each of those 50 courses, to promote the availability of multiple high-quality online course options, as specified.

The bill would ~~establish the California Student Access Pool, through which students could access online courses, and would~~ require the online courses approved by the ~~council~~ President of the University of California, the Chancellor of the California State University, and the Chancellor of the California Community Colleges, jointly, with the academic senates of the respective segments, under the bill to be placed in ~~this pool~~ the California Virtual Campus. The bill would require that matriculated students ~~taking~~ of campuses of the University of California, California State University, or California Community Colleges, and California high school pupils, who complete online courses ~~available in the pool and achieving~~ developed through the platform and achieve a passing score on corresponding course examinations, be awarded full academic credit for ~~the comparable~~ an equivalent course at the University of California, the California State University, or the California Community ~~Colleges. Because~~ Colleges, as applicable. The bill would provide that funding for the implementation of this provision would be provided in the annual Budget Act, and express the intent of the Legislature that the receipt of funding by the University of California for the implementation of this provision be contingent on its compliance with its requirements. Because this provision would require community colleges to award academic credit under these circumstances, it would constitute a state-mandated local program.
Section 1 is the findings and declarations portion of the bill, and changes include a focus on faculty partnership.
(e) California could significantly benefit from a statutorily enacted, quality-first, faculty-led framework that increases partnerships between faculty and online course technology providers aimed at allowing students ~~in online courses~~ in strategically selected lower division majors and general education ~~fields to be awarded~~ areas to take online courses for credit at the UC, CSU, and CCC systems. While providing easy access to these courses, these systems could also continually assess the value of the courses and the rates of student success in utilizing these alternative online pathways.
Section 2 is the major addition to California legislation if enacted, adding Section 66409.3 to the Education Code. The phrase “in partnership with faculty members of the University of California, the California State University, and the California Community Colleges” has been added in several places. Some key section changes:
(c) For purposes of accomplishing all of the objectives of the platform as specified in subdivision (b), the ~~California Open Education Resources Council~~ President of the University of California, the Chancellor of the California State University, and the Chancellor of the California Community Colleges, jointly, with the academic senates of the respective segments, shall do all of the following:
(1) (A) Develop a list of the 50 most impacted lower division courses at the University of California, the California State University, and the California Community Colleges that are deemed necessary for program completion or fulfilling transfer requirements, or deemed satisfactory for meeting general education ~~requirements.~~ requirements, in areas defined as high-demand transferable lower division courses under the Intersegmental General Education Transfer Curriculum.
(B) For purposes of this paragraph, “impacted lower division course” means a course in which, during most academic terms, the number of students seeking to enroll in the course exceeds the number of spaces available in the course.
(2) (A) For each of the 50 courses identified under paragraph (1), solicit and promote appropriate partnerships between online course technology providers and faculty of the University of California, California State University, and California Community Colleges which, by the fall term of the 2014–15 academic year, shall result in the availability of multiple high-quality online course options in which students may enroll in that term.
(B) An online course developed pursuant to this paragraph shall be deemed to meet the lower division transfer and degree requirements for the University of California, the California State University, and the California Community Colleges.
The amendments stipulate that faculty must be associated with each course, and enrollment is limited to matriculated California students.
(3) Create and administer a standardized review and approval process for online courses ~~in which most or all course instruction is delivered online and is open to any interested person. When reviewing~~ for matriculated students of the University of California, California State University, and California Community Colleges, or for California high school pupils. No course shall be approved for purposes of this section unless the course has associated with it a faculty sponsor who is a member of the faculty of the University of California, the California State University, or the California Community Colleges.
In a significant change, all references to recommendations from the American Council on Education have been removed.
~~(G) Includes content that has been reviewed and recommended by the American Council on Education.~~
Courses will be listed in the California Virtual Campus, with funding provided through the annual Budget Act.
(d) Online courses approved ~~by the California Open Education Resources Council~~ through the platform pursuant to this section shall be placed in the California ~~Student Access Course Pool, which is hereby created~~ Virtual Campus, through which students may access the courses. ~~Students taking~~ A matriculated student of a campus of the University of California, California State University, or California Community Colleges, or a California high school pupil, who completes an online course ~~available in the California Student Access Course Pool and achieving~~ developed through the platform and achieves a passing score on the corresponding course examination shall be awarded full academic credit for ~~the comparable~~ an equivalent course at the University of California, the California State University, or the California Community Colleges, as applicable.
(e) Funding for the implementation of this section shall be provided in the annual Budget Act. It is the intent of the Legislature that, notwithstanding Section 67400, the receipt of funding by the University of California for the implementation of this section be contingent on its compliance with the requirements of this section.
Response to Faculty Senate Pushback
Many of these changes appear to be in response to faculty senate pushback. The CSU faculty senate “voted unanimously to take a formal position of oppose unless amended with regard to SB 520”. The biggest faculty concern centered on the involvement of the California Open Education Resources Council (COERC), which they felt removed faculty authority over curricula and bypassed existing quality measures in the three systems.
The part of the faculty senate pushback that goes to the intent of the bill, and therefore was not included in any amendments, is their concern over the applicability of online education to lower-division courses.
More specifically, the ASCSU has serious concerns about increasing access to California’s higher education system for lower division students through the use of online courses of study. CSU is a leader in online course delivery for upper division and graduate students. However, research has shown that online courses are not as effective for lower division students, underprepared students, or lower income students. Targeting lower division courses for online delivery puts these very students at greater risk for failure rather than facilitating their access to academic success.
These changes are very recent, so it remains to be seen what effect they will have on faculty support or resistance.
The post Amendments of California SB520 Bill for Online Courses appeared first on e-Literate.
For any online program in the US that enrolls students from more than one state, the Department of Education’s proposed State Authorization regulations are a major issue. WCET has played a leading role in raising awareness of the issue as well as pushing for a solution. From their summary page (read the whole page for a summary of the timeline, pushback, state regulations, etc.):
On October 29, 2010, the U.S. Department of Education (USDOE) released new “program integrity” regulations. One of the regulations focused on the need for institutions offering distance or correspondence education to acquire authorization from any state in which it “operates.” This authorization is required to maintain eligibility for students of that state to receive federal financial aid. Institutions have until July 1, 2014, to have obtained the appropriate approvals. Meanwhile, institutions are required to demonstrate a ‘good faith’ effort to comply in each state in which it serves students. While the regulation has been ‘vacated’ by court order, we believe it will be reinstated.
To give an idea of the issues, consider that Missouri charges institutions fees of $5,000 to $25,000 to register in the state, on top of a burdensome process. While not all states are as expensive as Missouri, the costs and overhead add up quickly, and requirements conflict and vary from state to state. According to a survey from UPCEA, WCET and Sloan-C, one third of online programs have not applied to any states outside their home state, despite serving a median of 37 states. Furthermore, State Authorization rules would stifle online education programs and are already causing many programs to reject students in certain states.
Despite losing in court (the ruling was vacated), the Department of Education still plans to push forward and revive State Authorization.
The most promising approach to dealing with this situation is the State Authorization Reciprocity Agreement (SARA).
The backbone of the Commission’s recommendations is a system of interstate reciprocity based on the voluntary participation of states and institutions to govern the regulation of distance education programs. Participating states will agree on a uniform set of standards for state authorization that ensure that institutions can easily operate distance education programs in multiple states as long as they meet certain criteria relating to institutional quality, consumer protection, and institutional financial responsibility (further described below). Participating institutions must be authorized by their “home state” (which is, presumptively, the institution’s state of legal domicile). Once designated, the home state should have responsibility for authorizing the institution for purposes of interstate reciprocity and be the default forum for consumer complaints.
WCET has a summary post up by Russ Poulin that describes the latest report and commission meeting on SARA.
A national meeting on next steps in state reciprocity was held in Indianapolis on April 16 and 17. The purpose of the event was to serve as an initial introduction to representatives from each state about next steps in reciprocity.
The session focused on the report: Advancing Access through Regulatory Reform: Findings, Principles, and Recommendations for the State Authorization Reciprocity Agreement (SARA) that was recently released by the Commission on the Regulation of Postsecondary Distance Education. The Commission, which is a committee formed by APLU (the land-grant universities) and the State Higher Education Executive Officers, built upon the work of previous efforts of the Presidents’ Forum/Council of State Governments and the regional higher education compacts. You can see a short history of state authorization and the reciprocity efforts on our web page.
Russ goes on to describe support from ACE and even Hal Plotkin from the Department of Education:
While the Department of Education cannot formally endorse the work, he brought a two-word message from Secretary Arne Duncan and Under Secretary Martha Kanter: “thank you.”
There is also a summary of the key questions being considered, including accreditation effects, fees for institutions participating in SARA, determination of the Home State, and the impact of the 25% rule (more on that one in a future post).
In short – this is an important issue to track, and WCET has some excellent resources to help online programs stay up-to-date.
The post Summary from WCET on State Authorization Reciprocity Agreement appeared first on e-Literate.
Instructure took another step this past week to establish Canvas as a true learning platform, moving beyond the traditional bounds of an LMS. The company announced the upcoming release of the Canvas App Center, scheduled for availability at the same time as their annual users conference in June, which will allow end-user (read: faculty and student) integration of third-party apps.
I wrote about the trend of the market moving towards learning platforms last year.
In my opinion, when we look back on market changes, 2011 will stand out as the year when the LMS market passed the point of no return and changed forever. What we are now seeing are some real signs of what the future market will look like, and the actual definition of the market is changing. We are going from an enterprise LMS market to a learning platform market.
What I mean by ‘enterprise LMS’ is the legacy model of the LMS as a smaller, academically-facing version of the ERP. This model was based on monolithic, full-featured software systems that could be hosted on-site or by a managed hosting provider. A ‘learning platform’, by contrast, does not contain all the features in itself and is based on cloud computing – multi-tenant, software as a service (SaaS). [emphasis added]
The key idea is that the platform is built to easily add and support multiple applications. The apps themselves will come from edu-apps.org, a website that launched this past week. There are already more than 100 apps available, built on top of the Learning Tools Interoperability (LTI) specification from the IMS Global Learning Consortium. There are educational apps available (e.g. Khan Academy, CourseSmart, Piazza, the big publishers, Merlot) as well as general-purpose tools (e.g. YouTube, Dropbox, WordPress, Wikipedia).
The apps themselves are wrappers that pre-integrate and give structured access to each of these tools. Since LTI is the most far-reaching ed tech specification, most of the apps should work on other LMS systems. The concept is that other LMS vendors will also sign on to the edu-apps site, making the apps truly interoperable. Whether that happens in reality remains to be seen.
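To make the “wrapper” idea concrete: an LTI 1.0/1.1 launch is essentially a form POST from the LMS to the tool’s launch URL, signed with OAuth 1.0 (HMAC-SHA1) using a key and secret shared between the LMS and the tool. A minimal sketch in Python follows; the parameter names come from the LTI 1.1 specification, but the URL, credentials, and the `sign_lti_launch` helper itself are illustrative, not Canvas’s actual code:

```python
import base64
import hashlib
import hmac
import time
import uuid
from urllib.parse import quote


def sign_lti_launch(launch_url, params, consumer_key, shared_secret):
    """Add OAuth 1.0 (HMAC-SHA1) signature parameters to an LTI 1.1 launch POST."""
    oauth_params = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    signed = {**params, **oauth_params}
    enc = lambda s: quote(str(s), safe="~")  # RFC 5849 percent-encoding
    # Signature base string: method, encoded URL, and sorted, encoded parameters.
    param_str = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(signed.items()))
    base_string = "&".join(["POST", enc(launch_url), enc(param_str)])
    key = enc(shared_secret) + "&"  # no token secret in an LTI launch
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    signed["oauth_signature"] = base64.b64encode(digest).decode()
    return signed


# Minimal LTI 1.1 launch parameters (names from the spec; values hypothetical).
launch = sign_lti_launch(
    "https://tool.example.com/launch",  # hypothetical tool launch URL
    {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "course-101-link-1",
        "user_id": "student-42",
        "roles": "Learner",
    },
    consumer_key="demo-key",        # hypothetical shared credentials
    shared_secret="demo-secret",
)
```

The tool on the receiving end recomputes the same signature with its copy of the secret and rejects the launch if it does not match, which is what lets an end-user-added app trust the identity and role information coming from the LMS without any further IT involvement.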
What the App Center will bring once it is released is the simple ability for Canvas end users to add apps themselves. If a faculty member adds an app, it will be available for their courses, independent of whether any other faculty use that setup. The same applies for students, who might, for example, prefer to use Dropbox to organize and share files rather than native LMS capabilities.
Not a New Idea, Just Taking Concept to Application
The idea of having the ability to easily integrate multiple applications into a learning environment is not new. SUNY Learning Network (SLN) was working on the Learning Management Operating System (LMOS) concept back in the mid 2000s (where Michael was one of the key drivers behind this initiative), but the LMOS implementation did not pan out. Patrick Masson, another key player in the initiative, went on to UMassOnline after SLN and has been instrumental in the creation of the Needs Identification Framework for Technology Innovation (NIFTI) to enable local adoption of learning tools. The general desire to support easy integration of apps also led to the LTI specification.
What has not been available, however, is the empowerment of end users to make these decisions without going through the IT department or LMS system administrators.
IMS Global is also talking about the need for an educational app store, as described in Rob Abel’s blog post last week.
For those of us that have been attending Learning Impact the last several years (and, yes, don’t forget to sign up right now for this year’s because space is getting short!), we already know what the future of the “LMS” is (and that the term LMS is a bad name for what it has been or what it will be). We also know what the general roadmap for digital learning resources is and how this evolution is intertwined with the evolution of the LMS. That’s because the LMS is evolving into a disaggregation of features and resources that come together easily and seamlessly for the needs of teachers and students.
The post also announced that IMS plans to support development of an app store to be available in a few years.
Can universities and school districts control their own online “store” of educational content and applications for easy access and use by students and faculty? Yes they can – and they will in only a few short years. Will such an “app store” be based on Apple, Google or Amazon? No it will not.
The “take it or leave it” proprietary vertical integration strategies of consumer-oriented providers of digital books and applications, that maximizes their ability to create revenues from sales of such resources, have left educational institutions with a conundrum. Do we dare dictate to our students and teachers a “preferred platform?” Of course, the answer to that question needs to be “no.”
What is not apparent, however, is whether the Canvas App Center will be seen as friend or foe with the IMS effort. The Canvas effort will be ready years before the proposed IMS effort, it is offered for free, the apps are built on LTI, and the API for the app is itself open-source. But . . . it will be run by a vendor.
Update: Clarification provided by Rob Abel here in the comments. Short answer – IMS does not see Canvas App Center as a threat but as a very positive development; there is concern over language of “LTI compliant” apps that are not cross-platform compatible.
Who’s In Control?
The closest vendor-based effort to the Canvas App Center is probably xpLor from Blackboard, which Michael described in this post. This cloud-based platform is not technically an app store model, but it does enable standards-based content and applications to be shared with the core LMS from a cloud-based platform. xpLor appears to be focused more on packages of content, grouped learning material and communities of interest. Despite some of the similarities, xpLor focuses more on institutional decision-making and system administrator control, whereas the Canvas App Center focuses more on easy access to consumer-based tools for faculty, students or system administrators.
From the press release:
“We want to tear down the walled garden that has plagued the LMS market,” Instructure co-founder and CPO Brian Whitmer said. “Third party integrations have existed, but they’ve required the IT department to make them work. With Canvas App Center, we want to let anyone install an app with one click and begin personalizing their learning experience with these tools.”
Tired of Waiting
While the core concept is not new, and as seen by IMS plans is not unique, the significance of the Canvas App Center and the corresponding edu-apps site is in making the idea much more of a reality. Brian Whitmer created a slideshare with audio that gives more detail on the announcement, including a description of Instructure’s frustration that educational technology is still not an ecosystem. I recommend the slideshare to people wanting to get more of a UI-based explanation of the concept.
This attitude exhibited by Instructure – a focus on consumer-based tools and a desire to implement basic concepts quickly – matches their pedigree as a venture-capital-backed company with a startup mentality.
I believe that the App Center will significantly push forward the adoption and importance of LTI, but it is not clear whether the benefits will only affect Canvas customers or actually push the LMS field further into a learning platform market. As with all pre-announcements, a great deal of the impact will depend on the actual implementation of the new software.
One other factor to watch will be whether Canvas institutions can (or should) adjust to the paradigm shift of enabling faculty and student adoption of pre-integrated tools. Concerns over data security, standardization and loss of control could cause some schools to take a cautious stance towards the app center.
And now for this week’s version of “do you notice which publications are not covering this story”:
- PR Newswire (official press release), “Instructure Announces Canvas App Center”
- TechCrunch, “Instructure Launches App Center To Let Teachers, Students Install Third-Party Apps Across Learning Platforms”
- CampusTechnology, “Canvas App Center Brings 1-Click Access to LMS Add-ons”
- InformationWeek, “Canvas LMS Maker Launches Open Education Apps Directory”
- PandoDaily, “Instructure launches open Canvas App Store to turn education into an ecosystem”
The post Tear Down This Wall(ed Garden): Canvas App Center to Offer End User Control Over Apps appeared first on e-Literate.
Editor’s Note: I am pleased to announce that Bill has agreed to continue contributing blog posts from time to time. Therefore, he is now officially a “Featured Blogger” rather than a “Guest Blogger.”
Last week, I had the privilege of speaking at a workshop on online graduate education. At that workshop, Carnegie Mellon University Provost and Executive Vice President Dr. Mark Kamlet used the words “Learning Engineering” in his keynote, which I built upon in my talk. In my previous post I referenced the need for semantic data and algorithms to support learning engineers in creating and iteratively improving courses and courseware (among other things). I felt it was worth taking a little time to describe just what I believe that means.
For over 10 years, the Open Learning Initiative has been bringing together teams to develop online course materials. Carnegie Mellon is an ideal place to cultivate this work thanks to its multi-disciplinary programs and culture, as well as its expertise in the related fields. During that time we’ve built a team of experts who are critical to building learning environments that are informed by research, capable of recording data for iterative improvement, and able to generate dynamic reports for stakeholders.
Discovering Learning Engineering
At OLI, we have followed a path outlined by CMU professor Herb Simon, Nobel Laureate:
“Improvement in post secondary education will require converting teaching from a solo sport to a community based research activity.”
If you’ve seen someone from OLI speak more than once, you’ve seen this quote and might be tempted to gloss right over it. But it’s worth considering closely, particularly in this context. We have found that the best way to build effective learning environments is to regularly convene faculty, software engineers, usability specialists, learning scientists, and others.
What does it take, then, to be someone who can sit at the center of this kind of diverse group and produce an online learning environment with successful outcomes? We’ve admittedly struggled with this question as we’ve grown as a project. It turns out that part of the problem was that we were trying to shoehorn people with existing skill sets into a role that is really what we’ve come to call the learning engineer.
Engineering Learning? You Bet.
Starting with the source of all knowledge, I look to how Wikipedia defines engineering:
Engineering is the application of scientific, economic, social, and practical knowledge, in order to design, build, and maintain structures, machines, devices, systems, materials and processes. It may encompass using insights to conceive, model and scale an appropriate solution to a problem or objective. The discipline of engineering is extremely broad, and encompasses a range of more specialized fields of engineering, each with a more specific emphasis on particular areas of technology and types of application.
I can’t think of a better way to describe what it is we ask our learning engineers to do. But I work with them every day. So let me draw a rudimentary comparison: Imagine a more “traditional” engineer hired to design a bridge. They don’t revisit first principles to design a new bridge. They don’t investigate gravity, nor do they ignore the lessons learned from previous bridge-building efforts (both the successes and the failures). They know about many designs and how they apply to the current bridge they’ve been asked to design. They are drawing upon understandings of many disciplines in order to design the new bridge and, if needed, can identify where the current knowledge doesn’t account for the problem at hand and know what particular deeper expertise is needed. They can then inquire about this new problem and incorporate a solution.
In this way, a learning engineer applies learning science, pedagogy, and what is known about other relevant disciplines (user experience, for example) to the problems of developing learning environments. When designing for platforms that collect semantic data, they understand the requirements of the materials they are creating and can ensure that the data collection will provide actionable results. This does not mean a learning engineer has to understand the intricacies of the algorithms that operate on the data, but they do need a sufficient understanding of the needs of that data collection.
In one way, this type of engineering is more rapid and responsive than “traditional” engineering. We can learn from the delivery of the “built bridge” just what parts are effective and what parts need improvement. (This requires semantic data in order to discern them.) In the comparison I’ve made, one doesn’t usually go back and make a bridge better unless something terribly wrong comes to light. Here we can monitor and continually improve our previous work as well as apply those lessons forward to new developments.
That addresses lessons learned “in the field” (practice informing sciences). In the other direction (sciences informing practice), the comparison is harder to make. If some critical flaw is discovered, one might go back and “patch” a bridge. For a learning engineer, however, revisiting work is not a rare occurrence but an expected part of an iterative improvement process. Thus, a learning engineer must be aware of ongoing research in related fields and stay current with our understanding of how to teach effectively. We’ve only begun to understand teaching and learning in scientific ways and cannot rest on what we know so far. Learning engineering, then, as a field, is really about developing processes and methodologies to support this work.
One good point made to me by a workshop attendee after my talk: if a bridge falls down, you know about it. In the world of online education, where rich evaluation is rare, we don’t even know if our bridges are falling down.
Something We’ve Needed All Along?
Although the work to advance online education has been the spark that has made obvious the need for collaborative efforts and for individuals who can work in those highly interdisciplinary teams, I refer back to the quote at the opening. Simon wasn’t saying that online education required converting how we teach; it just so happens that online education has made the need obvious. If we’re truly honest with ourselves, not all experts make the best teachers. This is not to say that top-tier institutions with high-caliber faculty aren’t offering a great opportunity to students by providing access to leading researchers. (“Minds rubbing against minds,” as it were.) But those leading researchers are not guaranteed to be the best teachers, especially when they are often handed a course to teach as a secondary requirement of a role they may not be interested in.
Some shared experiences of undergraduates everywhere:
- I thought I understood the lecture, but I don’t know where to start on this homework!
- That midterm came out of nowhere – I didn’t understand it.
- I read the chapter as told but then the lecture made no sense to me.
These are the result of poor alignment in objectives, practice and assessment, which is already known to be important. This is the kind of insight and experience that the most brilliant minds can benefit from when it comes to teaching the novice. (See also the expert blind spot).
A learning engineer works with content experts, guides their work, and brings in other points of view as needed in order to best develop learning experiences – it just so happens that we now need them even more for the online experience.
How to Find a Learning Engineer
The reality is that right now individuals with such skill sets are hard to find “in the wild,” and it will be some time before that changes dramatically. What is required is to find talented people who are interested in the work and already have some of the skills needed: someone with a strong learning science background who wants to see immediate practical application of their work, for example, or someone with a strong instructional design background interested in learning how to apply learning science and data analytics to what they do. The two groups can then be moved toward each other. That model does provide a way to find candidates, and it acknowledges that some effort has to be made to develop the skill sets of a learning engineer upon hiring.
I do not believe this is a case of looking for what in the software world you’d call a unicorn. It really is vital to all of us in education to develop a workforce of people who understand how the creation of learning materials happens, as well as how to apply ongoing developments in our understanding of how to effectively develop and test those materials.
Aren’t Learning Engineering and Instructional Design the Same?
This reminds me of when I started my career as a programmer. When I started programming, I was a software developer and not a software engineer. I knew how to write code, but I wasn’t ready to architect it or account for other disciplines in my work. A similar comparison applies here. The role of a learning engineer is not a support role, but a full contributor and participant in the process of developing an online learning environment. I asked one of our learning engineers how she viewed her role, to which she said “We want to learn about learning – what makes rich, deep, meaningful and lasting impact.” She builds environments that report data so her work can be evaluated, not to ask if she did a good job, but to learn how we might improve upon what we know to better the environment.
A learning engineer is a part of the process that improves or expands the technologies they work with. An instructional designer is often handed a suite of available technologies and content and told to make something of it. A learning engineer works both pedagogically and technologically to improve, create and make a whole experience and then evaluate the effectiveness of it with data.
An Essential Field
Learning engineering is part of what drives the success at OLI, and it is going to drive the development of well-informed online environments anywhere such work is done in the future. We believe this is an important area to define and then expand.
With that in mind, I leave you with a work in progress statement attempting to capture the key aspects of this field. (I already know it’s not easy to read, especially out loud in a talk without stopping to get your breath!) But I’m interested in hearing what others think of the content of this sentence. It doesn’t get into some of the practical implications I outline above but hopefully it captures the essence of the idea.
Learning Engineering: The development, evaluation and improvement of the processes, methodologies, and educational technologies that lead to predictable, repeatable development and improvement of learning environments which leverage learning science and the affordances of technology to address instructional challenges and create conditions that enable robust learning and effective instruction.
The post The Need For Learning Engineers (and Learning Engineering) appeared first on e-Literate.
As Phil mentioned in his last post, he and I had the privilege of participating in a two-day ELI webinar on MOOCs. A majority of the speakers had been involved in implementing MOOCs at their institutions in one way or another. And an interesting thing happened. Over the course of the two days, almost none of the presenters—with the exception of the ACE representative, who has a vested interest—expressed the belief that MOOCs provide equivalent learning experiences to traditional college courses. Keep in mind, these folks were believers. They were enthusiastic about MOOCs in general. But they tended to describe the value of MOOCs as reaching a different audience than the traditional matriculated college student and providing a different value. They talked about MOOCs as extending the university mission. By and large, they did not talk about them as being an improvement on, or even equal to, a traditional class. Now, there were well over 400 participants, so it wouldn’t be fair of me to say that there was unanimity, about this point or any other. But the level of agreement was remarkable.
On the other hand, there was widespread enthusiasm for using MOOCs as essentially substitutions for textbooks in classes that included instructors from the local campus. Vanderbilt created what they called a course “wrapper” around a Coursera MOOC on machine learning. Folks from Stanford talked about the notion of a “distributed flip,” i.e., a group of flipped classrooms participating together in a MOOC. And SJSU talked about using an edX course in a blended course environment on one hand, and a Udacity course with Udacity-provided “course mentors” on the other.
The obvious conclusion is that MOOCs are more of a threat to textbook companies than they are to universities. I think that’s true, but I also think it’s an oversimplification. There is a deeper (and older) trend to boil down a course into a set of digital artifacts that can be “played” by the student at will. It’s worth taking a deeper look at that trend, where it’s going, what’s useful about it, and what’s pernicious about it.
The Course as an Artifact: A Brief History
Course artifacts, in and of themselves, are hardly new. In fact, the textbook as a collection of catechisms (or questions and answers designed to facilitate memorization) goes back to at least the 4th Century A.D. Basically, the catechisms were the course. We tend to think of these being used in what we would call primary school today, but in fact, this sort of text-as-course was used at all levels of education. For example, check out the Catechism of the Steam Engine, published in the 1850s.
In the modern higher education context, there is a strong sense among many teaching faculty of themselves as craftspeople. In this view, they teach their courses their own way and use their unique strengths and knowledge to benefit their students. The degree to which this rhetoric matches reality varies wildly depending on the individual instructor, the level and subject of the course, and the school at which the course is being taught. There is a tendency among instructors of lower-division courses to follow the textbook pretty closely, including the homework and quizzes, and decorate that pre-packaged curriculum with lectures—particularly in courses that are easily machine-graded and tend to have very large enrollments.
This is not to say that the instructors and TAs in these classes add zero value over the textbook content. One of the most important but least valorized functions that an instructor serves in the class is providing support to students when they are stuck—answering questions, modeling good problem-solving skills, providing mentoring about study skills, and so on. Likewise, the curation that these faculty do in terms of picking the books, selecting the problem sets within the book, and so on, provides real value. (And this is a spectrum, rather than a binary distinction between faculty who just follow the book completely and faculty who make up their own curricula completely.) But the point is that much of what we refer to as the “course” is often packaged up in a set of artifacts that come from the textbook publishers and are augmented by pre-packaged performances of lectures by the professors. The degree to which this sort of thing happens is just hidden from view because it happens behind the closed doors of individual classrooms.
When the LMS first came onto the scene in the late 1990s, the one artifact that every professor would put online if they were putting up just one would be the syllabus. Then they might add lecture notes, and then possibly some readings. None of that really changed anything, since it was still happening behind the virtual closed doors of the LMS course logins. But in 2002, when MIT announced their OpenCourseWare initiative, the conversation began to change. Even though the process of adopting OpenCourseWare wasn’t essentially different from one of adopting a textbook publisher’s book and ancillaries, MIT’s brand imprimatur carried with it a sense of superiority in some quarters. Why would you, a professor at Unremarkable College, think that your course design is better than the famous MIT professor’s? On the one hand, it felt to me at the time like there was a strong undercurrent of elitism in these conversations. On the other hand, it raised the useful question of when the instructor is crafting the course curriculum to meet the particular needs of the students in the room versus when she is crafting it in order to satisfy her own creative needs as a craftsperson. But even here, OCW ultimately didn’t do much to disrupt the Order of Things. At most, OCW courses are recipes that can be adopted either in whole or in part by the instructors, and how they are adopted is still mostly kept behind closed doors.
Meanwhile, the textbook publishers were combining their textbooks—now online—with their ancillary materials and their homework platforms into a kind of higher-end courseware that goes a few steps beyond what you can get from a typical OCW package. Whether we are talking about Cengage MindTap, Pearson CourseConnect, or WileyPLUS, these product packages basically provide the curriculum, the course materials, the assessments and, in some cases, the analytics to track student progress and make study suggestions. Yet still, these are adopted mostly behind the closed door of the classroom.
Enter the MOOC
In some ways, the xMOOC in its current form is this trend to turn the course into an artifact taken to its logical conclusion (possibly ad absurdum). Course lectures are now artifacts in the form of videos. Assignment and assessment functions are packaged into machine-graded tools. Certification of knowledge is provided by the machines as well. Yes, there are still class discussions, and yes, the course instructors do participate sometimes, but they appear to be rather secondary in most of the xMOOC course designs I have looked at. In general, xMOOCs tend to explore the degree to which the pedagogical function can be fulfilled by artifacts.1
One critical difference is that, by raising the question of whether this package is worthy of being offered for credit, the MOOC is also forcing us to begin to articulate the value instructors add—both what they can add in principle and what they are adding in practice today in large survey courses under the conditions in which those courses are often taught. This is a big and complex question, far too big to address fully in one post. But I think the conversations that happen in places like the ELI webinar about what MOOC instructors think is missing from MOOCs that keeps them from being credit-worthy are an interesting first approximation of an answer. The sentiment articulated by some of the ELI webinar participants, which was echoed by a presentation at this week’s MOOC colloquium at RPI, is that xMOOCs don’t tend to be able to get at deep skill acquisition because students have limited opportunities to either see those skills modeled for them or to practice them. As Jim Hendler put it during the RPI colloquium, “I don’t hear a lot of talk about using MOOCs to give students PhDs.” To be clear, I don’t believe that it is impossible to provide that kind of deep skill learning in an online context; nor do I think that today’s giant lower-division survey courses do a whole lot to teach deep skills, by and large. But I do think that the gut reactions that folks in the MOOC conversations seem to be having are revealing in terms of the limits of what we think we can achieve at the moment with the course as a product—whether that product is instantiated through a MOOC or through an instructor “teaching” a traditional survey class and going through the motions as described to him by his textbook vendor. To the degree that a graduate seminar as a MOOC seems like a strange idea to you, ask yourself what would be missing and whether that missing element also belongs in our undergraduate survey courses.
The “Distributed Flip” and Other Amazing Feats
Equally revealing, in my view, is the significantly higher level of enthusiasm among MOOC veterans for using MOOCs as course materials for blended learning. But not just any blended learning. Two themes have been coming up repeatedly: flipping the classroom and collaboration between professors teaching the same class. You can get a clear sense of what’s going on from this guest column on The Chronicle’s “Professor Hacker” blog by Douglas Fisher of Vanderbilt University, who used a MOOC as the basis for his flipped class:
I now view MOOCs, and the assessment and discussion infrastructure that comes with them, as invaluable resources that I embrace and to which I add value. I, and I am guessing many others, are short steps away from full-blown customizations of individual courses and even entire curricula, drawing upon resources from around the world and contributing back to those resources.
The implications of MOOCs for community between faculty and students, as well as the relationships within and between local and global learning communities, interest and excite me. In fact, it is a nuance on the theme of community that I think is most responsible for my excitement as I embrace online educational content. For the first time in 25 years of teaching, I feel as though I am in a scholarly-like community with my fellow educators. I have long regarded scholarship as the noblest aspect of academia– the scholar’s tenacity in identifying, acknowledging, addressing and building on the intellectual contributions of others. I have not experienced the same profound sense of community among my colleagues in the education realm, however – I have largely been a lone wolf. Now there has been a profound shift in my mindset – I use and build on the educational production of others; I do it openly on public sites, of which I am proud rather than embarrassed; I contribute back, and my students see and learn from this practice of scholarly appreciation, and are even encouraged to contribute to it through their own content creation and sharing. This opportunity for “scholarship” in educational practice is what, as an educator and scholar, I find most exciting about this nascent and exploding online education movement.
I think the point about the missing community around teaching is particularly critical. The aforementioned RPI professor Jim Hendler, who was recognized by Playboy Magazine as “one of the nation’s most influential and imaginative college professors” who are “reinventing the classroom,”2 talked about how he struggled to flip his classroom in a way that his students would embrace and lamented that he had no training in pedagogy. Later in his presentation, he talked about how university libraries and computer labs, which used to be places where students would go and solve problems together, are largely empty now. I wondered about how college education would be different if professors had shared problem-solving spaces for their teaching, like the study carrels and computer centers of yore, and if there were no stigma attached to sharing.
Meanwhile this week, San José State University announced the creation of the Center for Excellence in Adaptive and Blended Learning, the first project of which will be to teach faculty at 11 other CSU campuses how to use an edX course on circuits and electronics as the basis for a flipped class. It’s a short step from training faculty on how to flip a class using the MOOC to a “distributed flip,” where those faculty members share best practices with each other as they teach the same class using the same materials, and have their students interact with each other on the MOOC discussion board. This is promising.
It also raises questions about the MOOC course designs. At RPI, I was able to ask edX’s Howard Lurie whether the course design for the blended classes in the SJSU project will be the same as the fully online one. He acknowledged that there would have to be a variant. We’re going to see more of that. To the degree that MOOCs are going to be used in this way, they need to (1) have the curricular wrap-around that scaffolds the classroom use, and (2) be designed to be modular so that faculty using them in their own classrooms can customize them to the local needs of their students. In other words, we need to be able to draw different and more flexible lines between where the course-as-artifact ends and the human-directed course experience begins. Which, by the way, is essentially what I think Adrian Sannier was saying in his interview with me a while back when he positioned OpenClass courses in contrast to MOOCs:
“Somebody will make a math class with 6 million students around the world. But it will be offered locally with teachers at a scale of between 1 to 20 and 1 to 50. Because teachers matter.”
And this is where we get to the part about MOOCs competing with the textbook vendors. Both MOOC producers and textbook vendors are beginning to converge on a product model of courseware that is more of a complete curriculum than a traditional textbook but less of a stand-alone, autopilot course than a current-generation xMOOC. Both groups have a lot to learn about creating flexible designs that make the right compromises between completeness and ease of localization, as well as about facilitating communities of practice among teaching faculty. But it’s clear that’s where we’re headed.
- This is in contrast with cMOOCs, which tend to explore the degree to which the pedagogical function can be fulfilled through crowdsourcing among the students.
- No, there wasn’t a photo spread.
Today Asahi Net International acquired the Sakai Division of rSmart. rSmart CEO Chris Coppola will join the Asahi Net International Board, creating interlocking boards. The financial arrangements have not been disclosed.
rSmart is a well known contributor to Apereo Inc.’s Sakai learning management system and to the Kuali suite of administrative software applications. rSmart has enhanced, implemented, and supported Sakai. It has also implemented the Kuali Financial System for colleges and universities.
This reorganization of effort may reflect broader changes in higher education: the relentless promotion of online learning and the demand for more productive administrative systems are being advocated as solutions to the rising cost of higher education in developed countries.
“Worldwide over 350 educational organizations [have confirmed use of] Sakai as a learning management system, research collaboration system and ePortfolio solution.” The actual number is much higher: Sakai users are not required to register their use, and the Sakai software does not automatically report its use to a central site. A new version of the Sakai software, Sakai OAE, is expected soon. It passed intensive performance testing late last year and can now serve a large number of students. The market for Sakai support and for Sakai as a “cloud” application is accelerating as colleges and universities continue to expand online education.
Asahi Net International’s parent company, Japan-based Asahi Net Inc., began developing a learning management system called manaba in 2007. The company offers manaba cloud-based learning services to 190 colleges and universities.
Georgetown University’s East Asia National Resource Center and Harvard College’s Japan Initiative are manaba users. Manaba support is led by Tomoka Higuchi McElwain, M.Ed., a Stanford University-trained educator. Presentations on the use of manaba were recently made at EDUCAUSE 2012, the 20th International Conference on Computers in Education (ICCE 2012), and the Association of International Education Administrators 2013 Conference.
In April 2011, Asahi Net International, Inc. was established as a New York company. It was founded to support the growing international use of manaba.
In August 2012, parent Asahi Net, GSV Capital, and others invested $10.75 million in rSmart, with GSV investment advisor Michael Moe advising. At the time, Moe expressed confidence in the firm, saying “rSmart is helping universities realize lower total cost of ownership and higher-quality products that are easy to use.” Moe was a principal investment advisor for the last wave of higher-education funding: private for-profit colleges and universities.
Takashi “Take” Takekawa is the President and CEO of Asahi Net International, Inc. and remains on the board of directors of rSmart. He received an MBA from Harvard Business School.
The Kuali Foundation developed the community source model where colleges and universities would cooperatively develop administrative software and make it open source so higher education as a whole could benefit from that investment. rSmart CEO Chris Coppola was an early and vigorous supporter of that community model.
rSmart Board Chair John Robinson has a long and successful history founding and developing companies providing administrative software to colleges and universities. He founded Information Associates, later acquired by Sungard SCT. Robinson describes his commitment: “A large part of my job is spreading the word, helping open source become the business model for the development and distribution of software in the higher-education marketplace. The most gratifying aspect of this work is seeing the open-source community grow in education and collaborating with so many exceptional people.” Kuali has benefited from his “working with community leaders in education.”
Currently, the Kuali Foundation has four open-source software product lines: finance, research administration, IT infrastructure—called Rice, and student systems. rSmart is a Kuali commercial affiliate supporting all four software systems. The number of installations is shown in the Figure.
The Kuali Foundation has also developed and supports research administration software called Coeus, a higher education-specific application based on the Massachusetts Institute of Technology system of the same name. This system is becoming mission-critical for research universities as federal funding shifts emphasis, and licensing is expected to become an increasing source of potential revenue for research universities. rSmart can be expected to benefit from their immediate need.
The two organizations combine talented staffs long committed to higher education, with a deep reservoir of experience and knowledge, as shown by their contributions to open-source software products designed specifically for higher education. Both companies have been and should continue to be successful without yielding control to organizations that have different values. Higher education should be pleased with today’s announcement.
Edited by Paul Heald, Sigma Systems Inc.
Correction: Data received today reported that the manaba network serves 190 institutions globally; this correction has been made. Combined, Asahi Net International will support 230 academic institutions serving 550,000 students.
The post Asahi Net International Acquires the Sakai Division of rSmart appeared first on e-Literate.
Last week, edX made a splashy spectacle of an announcement about automated essay grading, leaving educators fuming. Let’s rethink their claims.
“Give Professors a break,” the New York Times suggested in this joint press release from edX, Harvard, and MIT. The breathless story weaves a tale of robo-professors taking over the grading process, leaving professors free to put their feet up and take a nap, and subsequently inviting universities, ever focused on the bottom line, to fire all the professors. If I had set out to write an article intentionally provoking fear, uncertainty, and doubt in the minds of teachers and writers, I don’t think I could have done any better than this piece.
Anyone who has seen their own work covered in science journalism knows that the popular claims bear only the foggiest resemblance to the academic results. It’s unclear to me whether the misunderstanding is due to edX intentionally overselling their product for publicity, or whether something got lost in translation while writing the story. Whatever the cause, the story was cocksure and forceful about auto-scoring’s role in shaping the future of education.
I was a participant in last year’s ASAP competition, which served as a benchmark for the industry. The primary result of this, aside from convincing me to found LightSIDE Labs, is that I get email. A lot of email. I’ve been told that automated essay grading is both the salvation of education and the downfall of modern society. Naturally, I have strong opinions about that, based both on my experience developing the technology and participating in the contest, and on the conversations I’ve had since then.
Before we resign ourselves to burning the AI researchers at the stake, let’s step back for a minute and think about what the technology actually does. Below, I’ve tried to correct the most common fallacies I’ve seen, both in articles like the edX piece and in the incendiary commentary they provoke.

Myth #6: Automated essay grading is reading essays
Nothing will ever puzzle me like the way journalists require machine learning to behave like a human. When we talk about machine learning “reading” essays, we’re already on the losing side of an argument. If science journalists continue to conjure images of robots in coffee shops poring over a stack of papers, it will seem laughable, and rightly so.
To read an essay well, we’ve learned our entire lives, you need to appreciate all of the subtleties of language. A good teacher reading through an essay will hear the author’s voice, look for a cadence or rhythm in the writing, and appreciate the poetry in good responses to even the most banal of essay prompts.
LightSIDE doesn’t read essays – it describes them. A machine learning system does pore over every text it receives, but it is doing what machines do best – compiling lists and tabulating them. Robotically and mechanically, it pulls out every feature of a text that it can find: every word, every syntactic structure, and every phrase.
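To make “describing” concrete, here is a minimal sketch in Python of this kind of feature tabulation. This is not LightSIDE’s actual code, just the general idea of reducing a text to counted surface features:

```python
from collections import Counter
import re

def extract_features(text):
    """Describe a text as a table of surface features:
    every word and every adjacent word pair (bigram)."""
    words = re.findall(r"[a-z']+", text.lower())
    features = Counter(words)               # unigram counts
    features.update(zip(words, words[1:]))  # bigram counts
    return features

features = extract_features("The duck swam. The duck quacked.")
# The system never "reads" the sentence; it only tabulates it:
# the word "duck" occurred twice, the pair ("the", "duck") twice, etc.
```

Real systems add syntactic and phrase-level features on top of this, but the principle is the same: a text becomes a long table of counts, nothing more.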
If I were to ask whether a computer can grade an essay, many readers would reflexively respond that of course it can’t. If I asked whether that same computer could compile a list of every word, phrase, and element of syntax that shows up in a text, I think many people would nod along happily, and few would be signing petitions denouncing the practice as immoral and impossible.

Myth #5: Automated grading is “grading” essays at all
Take a more blatantly obvious task. If I gave you two pictures, one of a house and one of a duck, and asked you to find the duck, would you be able to tell the two apart?
Let’s be even more realistic. I give you two stacks of photographs. One is a stack of 1,000 pictures of the same duck, and one is a stack of 1,000 pictures of the same house. However, they’re not all good pictures. Some are zoomed out and fuzzy; others are way too small, and you only get a picture of a feather or a door handle. Occasionally, you’ll just get a picture of grass, which might be either a front lawn or the ground the duck is standing on. Do you think that you could tell me, after poring over each stack of photographs, which one was a pile of ducks? Would you believe the process could be put through an assembly line and automated?
Automated grading isn’t doing any more than this. Each of the photographs in those stacks is a feature. After poring over hundreds or thousands of features, we’re asking machine learning to put an essay in a pile. To a computer, whether this is a pile of ducks and a pile of houses, or a pile of A essays and a pile of C essays, makes no difference. The computer is going to comb through hundreds of features, some of them helpful and some of them useless, and it’s going to put a label on a text. If it quacks like a duck, it will rightly be labeled a duck.

Myth #4: Automated grading punishes creativity (any more than people do)
This is the assumption everyone makes about automated grading. Computers can’t feel and express; they can only robotically process data. This inevitably must lead to stamping out any hint of humanity from human graders, right?
Well, no. Luckily, this isn’t a claim that the edX team is making. However, by not addressing it head-on, they left themselves (and, by proxy, me, and everyone else who cares about the topic) open to this criticism, and haven’t done much to assuage people’s concerns. I’ll do them a favor and address it on their behalf.

An Extended Metaphor
Go back to our ducks and houses. As obvious as this task might be to a human, we need to remember once again, that machines aren’t humans. Presented with this task with no further explanation, not only would a computer do poorly at it; it wouldn’t be able to do it at all. What is a duck? What is a house?
Machine learning starts at nothing – it needs to be built from the ground up, and the only way to learn is by being shown examples. Let’s say we start with a single example duck and its associated pile of photographs. There will be some pictures of webbed feet, an eye, perhaps a photograph of some grass. Next, a single example house; its photographs will have crown molding, a staircase; but it’ll also have some pictures of grass, and some photographs might be so zoomed in that you can’t tell whether you’re looking at a feather or just some wallpaper.
Now, let’s find many more ducks and give them the same glamour treatment. The same for one hundred houses. The machine learning algorithm can now start making generalizations. Somewhere in every duck’s pile, it sees a webbed foot, but it never saw a webbed foot in any of the pictures of houses. On the other hand, many of the ducks are standing in grass, and there’s a lot of grass in most houses’ front lawns. It learns from these examples – label a set of photographs as a duck if there’s a webbed foot, but don’t bother learning a rule about grass, because grass is a bad clue for this problem.
This problem gets to be easy rather quickly. Let’s make it harder and now say that we’re trying to label something as either a house or an apartment. Again, every time we get an example, the machine learning model is given a large stack of photographs, but this time, it has to learn more subtle nuances. All of a sudden, grass is a pretty good indicator. Maybe 90% of the houses have a front lawn photographed at one point or another, but since most of the apartments are in urban locations or large complexes, only one out of every five has a lawn. While it’s not a perfect indicator, that feature suddenly gets new weight in this more specific problem.
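That generalization step can be sketched in a few lines. The “photographs” below are just toy feature lists I made up for illustration; the point is that the learner tallies how often each feature co-occurs with each label, which is exactly how “webbed foot” becomes a strong clue and “grass” a weak one:

```python
from collections import defaultdict

def feature_rates(examples):
    """For each label, what fraction of examples contain each feature?"""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for label, feats in examples:
        totals[label] += 1
        for f in set(feats):
            counts[label][f] += 1
    return {lbl: {f: n / totals[lbl] for f, n in fs.items()}
            for lbl, fs in counts.items()}

# Toy data: each "pile of photographs" is just a list of features.
examples = [
    ("duck",  ["webbed foot", "grass", "feather"]),
    ("duck",  ["webbed foot", "eye"]),
    ("house", ["crown molding", "grass", "staircase"]),
    ("house", ["staircase", "door handle"]),
]
rates = feature_rates(examples)
# "webbed foot" appears in 100% of ducks and 0% of houses: a strong clue.
# "grass" appears in half of each pile: a bad clue, so it earns little weight.
```

In the house-vs-apartment version of the problem, the same tally would show “grass” separating the two piles much more cleanly, which is why the identical feature gets different weight in a different problem.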
What does this have to do with creativity? Let’s say that we’ve trained our house vs. apartment machine learning system. However, sometimes there are weird cases. My apartment in Pittsburgh is the first floor of a duplex house. How is the machine learning algorithm supposed to know about that one specific new case?
Well, it doesn’t have to have matched up this exact example before. Every feature that it sees, whether it’s crown molding or picket fence, will have a lot of evidence backing it up from those training examples. Machine learning isn’t a magic wand, where a one-word incantation magically produces a result. Instead, all of the evidence will be weighed and a decision will be made. Sometimes, it’ll get the label wrong, and sometimes even when it’s the “right” decision, there’ll be room for disagreement. But unlike most humans, with a machine learning system we can point to exactly the features being used, and recognize why it made that decision. That’s more than can be said about a lot of subjective labeling done by humans.

Back to Essay Grading
All of the same things that apply to ducks, houses, and apartments apply to essays that deserve an A, a B, or a C. If a machine grading system is being asked to label essays with those categories, then machine learning will start out with no notion of what that means. However, after many hundreds or thousands of essays are exhaustively examined for features, it’ll know what features are common in the writing that teachers graded in the A pile, in the B pile, and in the C pile.
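To make this concrete, here is a deliberately tiny sketch of the kind of evidence-weighing a grading model performs. This is not edX’s or LightSIDE’s actual algorithm; it is a bare-bones naive Bayes classifier over word features, trained on a few fabricated “hand-graded” essays purely for illustration:

```python
import math
from collections import Counter, defaultdict

def train(graded_essays):
    """Learn, per grade, how often each word appeared in essays
    that human graders placed in that pile."""
    word_counts, pile_sizes = defaultdict(Counter), Counter()
    for grade, text in graded_essays:
        pile_sizes[grade] += 1
        word_counts[grade].update(text.lower().split())
    return word_counts, pile_sizes

def predict(model, text):
    """Weigh every word as a piece of evidence (naive Bayes with
    add-one smoothing) and pick the most likely pile."""
    word_counts, pile_sizes = model
    vocab = {w for c in word_counts.values() for w in c}
    scores = {}
    for grade in pile_sizes:
        total = sum(word_counts[grade].values())
        score = math.log(pile_sizes[grade] / sum(pile_sizes.values()))
        for w in text.lower().split():
            score += math.log((word_counts[grade][w] + 1) / (total + len(vocab)))
        scores[grade] = score
    return max(scores, key=scores.get)

model = train([
    ("A", "homologous structures show shared ancestry"),
    ("A", "homologous wing bones indicate common descent"),
    ("C", "the bones look the same in both animals"),
    ("C", "wings look the same"),
])
predict(model, "homologous bones show common descent")  # evidence favors "A"
```

Every score the model produces can be traced back to specific word counts observed in the human-graded piles, which is the transparency point: the decision is weighed evidence, not magic.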
When a special case arrives, an essay that doesn’t fit neatly into the A pile or the B pile, we’d have no problem admitting that a teacher has to make a judgment call by weighing multiple sources of evidence from the text itself. Machine learning learns to mimic this behavior from teachers. For every feature of a text – conceptually no different from poring over a stack of photographs of ducks – the model checks whether it has observed similar features from human graders before, and if so, what grade the teacher gave. All of this evidence will be weighed and a final grade will be given. What matters, though, might not be the final grade – instead, what matters is the text itself, and the characteristics that made it look like it deserved an A, or a C, or an F. What matters is that the evidence used is tied back to human behaviors, based on all the evidence that the model has been given.

Myth #3: Automated grading disproportionately rewards a big vocabulary
Every time I talk to a curious fan of automated scoring, I’m asked, “What are the features of good writing? What evidence ought to be used?” This question flows naturally, but the easy answers are thoughtless ones. The question is built on a bad premise. Yes, there are going to be some features that are true in almost all good writing, with connective vocabulary words and transition function words at the start of paragraphs. These might be like webbed feet in photos of ducks – we know they’ll always be a good sign. Almost always, though, the weight of any one feature depends on the question being asked.
When I work with educators, I recommend not just that they collect several hundred essays. I ask that they collect several hundred essays, graded thoroughly by trained and reliable humans, for every single essay question they intend to assign. This unique set allows the machine learning algorithm to learn not just what makes “good writing” but what human graders were using to label answers as an A essay or a C essay in that specific, very targeted domain.
This means that we don’t need to learn a list of the most impressive-sounding words and call it good writing; instead, we simply need to let the machine learning algorithm observe what humans did when grading those hundreds of answers to a single prompt.
Take, as an example, the word “homologous.” Is an essay better if it uses this word instead of the word “same”? In the general case, no; I dare anyone to collect a random sampling of 1,000 essays and show me a statistical pattern that human graders were more likely to give a random essay a higher grade if it were to make that swap. It’s simply not how human teachers behave, it won’t show up in statistics, and machine learning won’t learn that behavior.
On the other hand, let’s say an essay is asking a specific, targeted question about the wing structure of birds, and the essay is being used in a college freshman-level course on biology. In this domain, if we were to collect 1,000 essays that have been graded by professors, a pattern is likely to emerge. The word “homologous” will likely occur more often in A papers than C papers, based on the professors’ own grades. Students who use the word “homologous” in place of the word “same” have not singularly demonstrated, with their mastery of vocabulary, that they understand the field; however, it’s one piece of evidence in a larger picture, and it should be weighted accordingly. So, too, will features of syntax and phrasing, all of which will be used as features by a machine learning algorithm. These features will only be given weight in machine learning’s decision-making to the extent that they matched the behavior of human graders. By this specialized process of learning from very targeted datasets, machine learning can emulate human grading behavior.
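The “homologous” example can be shown with a toy calculation. The corpora and grades below are fabricated purely to illustrate the point that a feature’s weight depends entirely on the dataset it is learned from:

```python
def rate_gap(essays, word):
    """How much more often does `word` appear in A essays than C essays?"""
    def rate(grade):
        pile = [text for g, text in essays if g == grade]
        return sum(word in text.lower().split() for text in pile) / len(pile)
    return rate("A") - rate("C")

# Toy general-writing corpus: "homologous" is rare everywhere.
general = [("A", "a clear and well argued essay"),
           ("A", "strong thesis and evidence"),
           ("C", "a weak and rambling essay"),
           ("C", "the argument is unclear")]

# Toy bird-anatomy corpus: graders happened to reward domain vocabulary.
biology = [("A", "homologous wing structures show ancestry"),
           ("A", "the bones are homologous across species"),
           ("C", "the wings look the same"),
           ("C", "both birds have the same bones")]

rate_gap(general, "homologous")  # 0.0: no weight in the general case
rate_gap(biology, "homologous")  # 1.0: strong evidence in this targeted domain
```

The model never decides “homologous” is a fancy word; it only notices that, in this one prompt’s graded pile, the word happened to separate A papers from C papers.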
However, this leads into the biggest problem with the edX story.

Myth #2: Automated grading only requires 100 training examples
Machine learning is hard. Getting it right takes a lot more help at the start than you think. I don’t contact individual teachers about using machine learning in their courses, and when a teacher contacts me, I start my reply by telling them they’re about to be disappointed.
The only time it benefits you to grade hundreds of examples by hand to train an automated scoring system is when you’re going to have to grade many hundreds more. Machine learning makes no sense in a creative writing context. It makes no sense in a seminar-style course with a handful of students working directly with teachers. However, machine learning has the opportunity to make massive inroads for large-scale learning: for lecture hall courses where the same assignment is going out to 500 students at a time; for digital media producers who will be giving the same homework to students across the country and internationally; and so on.
It’s dangerous and irresponsible for edX to claim that 100 hand-graded examples are all that’s needed for high-performance machine learning. It’s wrong to claim that a single teacher in a classroom might be able to automate their curriculum with no outside help. That’s not only untrue; it will also lead to poor performance, and a bad first impression is going to turn off a lot of people to the entire field.

Myth #1: Automated grading gives professors a break
Look at what I’ve just described. Machine learning gives us a computer program that can be given an essay and, with fairly high confidence, make a solid guess at labeling the essay on a predefined scale. That label is based on its observation of hundreds of training examples that were hand-graded by humans, and you can point to specific, concrete features that it used for its decision, like seeing webbed feet in a picture and calling it a duck.
Let’s also say that you can get that level of educated estimation instantly – less than a second – and the cost is the same to an institution whether the system grades your essay once or continues to give a student feedback through ten drafts. How many drafts can a teacher read to help in revision and editing? I assure you, fewer than a tireless and always-available machine learning system.
We shouldn’t be thinking about this technology as replacing teachers. Instead, we should be thinking of all the places where students can use this information before it gets to the point of a final grade. How many teachers only assign essays on tests? How many students get no chance to write in earlier homework, because of how much time it would take to grade; how many are therefore confronted with something they don’t know how to do and haven’t practiced when it comes time to take an exam that matters?
Machine learning is evidence-based assessment. It’s not just producing a label of A, B, or F on an essay; it’s making a refined statistical estimation of every single feature that it pulls out of those texts. If this technology is to be used, then it shouldn’t be treated as a monolithic source of all knowledge; it should be forced to defend its decisions by making its assessment process transparent and informative to students. This technology isn’t replacing teachers; it’s enabling them to get students help, practice, and experience with writing that the education field has never seen before, and without machine learning technology, will never see.

Wrapping Up
“Can machine learning grade essays?” is a bad question. We know, statistically, that the algorithms we’ve trained work just as well as teachers for churning out a score on a 5-point scale. We know that occasionally they’ll make mistakes; however, more often than not, what the algorithms learn to do is reproduce the already questionable behavior of humans. If we’re relying on machine learning solely to automate the process of grading, to make it faster and cheaper and enable access, then sure. We can do that.
But think about this. Machine learning can assess students’ work instantly. The output of the system isn’t just a grade; it’s a comprehensive, statistical judgment of every single word, phrase, and sentence in a text. This isn’t an opaque judgment from an overworked TA; this is the result of specific analysis at a fine-grained level of detail that teachers with a red pen on a piece of paper would never be able to give. What if, instead of thinking about how this technology makes education cheaper, we think about how it can make education better? What if we lived in a world where students could get scaffolded, detailed feedback to every sentence that they write, as they’re writing it, and it doesn’t require any additional time from a teacher or a TA?
That’s the world that automated assessment is unlocking. edX made some aggressive claims about expanding accessibility because edX is an aggressive organization focused on expanding accessibility. To think that’s the only thing that this technology is capable of is a mistake. To write the technology off for the audacity of those claims is a mistake.
In my next few blog posts, I’ll be walking through more of how machine learning works, what it can be used for, and what it might look like in a real application. If you think there are specific things that ought to be elaborated on, say so! I’ll happily adjust what I write about to match the curiosities of the people reading.
The post Six Ways the edX Announcement Gets Automated Essay Grading Wrong appeared first on e-Literate.
When the story first broke a while back about the Kaggle contest for robo-grading essays that could be “similar to” human graders, I got interested. So after doing a little reading, I ended up contacting a guy by the name of Elijah Mayfield, a PhD student at Carnegie Mellon University and one of the winners of the contest. The net result of our conversation is that I ended up writing a blog post called “What Is Machine Learning Good For?” Fast-forward a bit. Elijah now has a start-up called LightSIDE Labs based on the same technology, and The New York Times is writing puff pieces on how edX is going to change the world with this technology. In the meantime, I have been talking to Elijah for a while about getting him to write at e-Literate.
I’m happy to say that Elijah will be doing a series of posts for us on machine learning in education, starting today. Please welcome him.
Michael and I had the privilege of leading off the ELI Online Spring Focus Session on MOOCs (taking place today, Apr 3, and tomorrow). Thanks to Stephen Downes, we have a good set of notes on our presentation “Everything You Thought You Knew About MOOCs Could Be Wrong” (program & resources, notes), as well as notes on each of the other sessions, all on his blog. You know, I think he actually believes this stuff about learner-generated content and self-forming communities . . . In all seriousness, this is a great session by ELI, great notes by Stephen, and it’s instructive to see the various channels such as chat, Twitter, Google+, and blogs tied to the session (most using #elifocus).
There was an interesting presentation from Seth Anderson (Duke), Amy Collier (Stanford), and Cassandra Horlii (California Institute of Technology) titled “Designing and Implementing MOOCs that Maximize Student Learning” (program & resources, notes) that gives additional insight into MOOC student types. Quoting from Stephen’s notes:
In most of the MOOCs Stanford has offered, fewer than half the students have come from North America. Nearly half the people in a MOOC may not have a knowledge of English as a first language. They have a more varied educational and cultural background than in traditional courses.
The majority of active users say they’re taking the course for fun or a challenge, rather than a credential. The tendency to judge MOOCs based on completion rates overlooks the reasons why people join a MOOC. The majority engage in sampling behaviour – like the people in this webinar. Many are MOOC auditors, but they don’t engage, and they aren’t motivated by completion records.
This is an important aspect of MOOCs and open courses in general that gets missed by popular media and even by some MOOC faculty. The students have varying goals and behavior within a course, and it is a mistake to assume they should converge on a common goal and set of behaviors.
The categories are described in more in detail in the paper “Deconstructing Disengagement: Analyzing Learner Subpopulations in Massive Open Online Courses”. First, the basic demographics:
Our analysis of learner trajectories is based on three computer science courses that vary in their level of sophistication: “Computer Science 101″ covers high school level content (HS-level), “Algorithms: Design and Analysis” covers undergraduate level content (UG-level), and “Probabilistic Graphical Models” is a graduate level course (GS-level). Table 1 provides basic demographic information and summarizes how many learners were active on the course website at any point in time (as opposed to simply enrolling and never participating). In all three courses, the vast majority of active learners are employed full-time, followed by graduate and undergraduate students. Moreover, most learners in the UG-level and GS-level courses come from technology-related industries. The majority of learners in the UG-level course report to hold a Master’s or a Bachelor’s degree. Geographically, most learners are located in the United States, followed by India and Russia.
Then, the categories:
- Completing: Learners who completed the majority of the assignments offered in the class. Though these participants varied in how well they performed on the assessment, they all at least attempted the assignments. This engagement pattern is most similar to a student in a traditional class.
- Auditing: Learners who did assessments infrequently, if at all, and engaged instead by watching video lectures. Students in this cluster followed the course for the majority of its duration. No students in this cluster obtained course credit.
- Disengaging: Learners who did assessments at the beginning of the course but then showed a marked decrease in engagement (their engagement patterns look like Completing at the beginning of the course, but then the student either disappears from the course entirely or sparsely watches video lectures). The moments at which the learners disengage differ, but it is generally in the first third of the class.
- Sampling: Learners who watched video lectures for only one or two assessment periods (generally learners in this category watch just a single video). Though many learners “sample” at the beginning of the course, there are many others that briefly explore the material when the class is already fully under way.
This view of participants, based on the schools’ MOOC experience, is somewhat analogous to the five student patterns I previously described, assuming the following mapping:
- Completing = Active Participants
- Auditing = Passive Participants
- Disengaging = Drop-outs or people moving from Active Participant to Passive Participant to Observer
- Sampling = a combination of Observers and Drop-Ins
All in all, this was a very interesting first day of the focus session – I’m looking forward to more of the discussions tomorrow.
The post Insight on MOOC student types from ELI Focus Session appeared first on e-Literate.
Earlier this week I wrote about the new patent awarded to the University of Phoenix (the for-profit institution owned by the Apollo Group) for the activity stream within their new online learning platform. The patent gives us a glimpse into a billion-dollar bet that Phoenix is making on this next-generation LMS that will power their move into adaptive learning.
The University of Phoenix has always been known for using a homegrown LMS, which is understandable given the large size (360,000 students) of the school. In 2009, Phoenix began investing in a completely new learning platform as part of the “Learning Genome Project”. While the company has traditionally been reluctant to describe its internal systems, starting with the 2010 EDUCAUSE conference Phoenix began sharing more information on this project.
The promise of adaptive learning
Steve Kolowich at Inside Higher Ed wrote an article on the new learning platform in October 2010 based on information shared by Phoenix’s Director of Data Innovation.
Where Facebook has shown unique value is as a data-gathering tool. Never has a website been able to learn so much about its users. And that is where higher education should be taking notes, said Angie McQuaig, director of data innovation at the University of Phoenix, at the 2010 Educause conference on Friday.
The trick, she said, is individualization. Facebook lets users customize their experiences with the site by creating profiles and curating the flow of information coming through their “news feeds.” In the same motion, the users volunteer loads of information about themselves. [snip]
This is where the University of Phoenix is headed with its online learning platform. In an effort ambitiously dubbed the “Learning Genome Project,” the for-profit powerhouse says it is building a new learning management system (or LMS) that gets to know each of its 400,000 students [ed. now reduced to 360,000] personally and adapts to accommodate the idiosyncrasies of their “learning DNA.”
The article goes on to describe the Phoenix vision of adaptive learning powered by the learning platform, stating that data analytics is going to kill the standardized curriculum dominant in higher education.
Additional insight was provided in a February 2011 article by Josh Keller in the Chronicle that further described the scope and approach of the learning platform development.
Two years ago, leaders at the University of Phoenix decided that its software for students was outdated. So it hired tech-industry heavyweights from Yahoo and elsewhere, installed a team of more than 100 people here in San Francisco, and gave them free rein to rebuild the college’s online-learning environment from scratch.
The team created a social network that borrows heavily from Facebook. It developed a data platform that collects and analyzes billions of clicks, messages, and interactions among students and their instructors. And it started profiling students’ online behavior to personalize how they are taught. [snip]
When students log in, they see recommended tasks for that day and a personalized discussion feed that resembles one pioneered by Facebook. They can see who else is online and chat with other students and instructors.
One goal is to better help students find the right people among Phoenix’s vast network who are online and could help them learn, says Michael White, Apollo’s chief technology officer. “My faculty member’s not online, but 700 faculty members who teach the same thing are online, so it’s really the power of the network,” he says.
The billion-dollar bet
How serious is Phoenix about this approach? Quite serious, as the university appears to be making a billion-dollar bet on personalized learning directly powered by this new learning platform.
The Chronicle covered the Phoenix announcement from October 2012 that they would close 115 locations, including this mind-boggling statement:
Mr. Brenner [senior vice president for corporate communications and external affairs at the Apollo Group] said Apollo was investing $1-billion in a new online-learning management system.
Not all of the investment is pure development as it includes the 2011 acquisition of Carnegie Learning for $75 million. From the press release:
The acquisitions allow Apollo to accelerate its efforts to incorporate adaptive learning into its academic platform and to provide tools to help raise student achievement in mathematics, which supports improved retention and graduation rates.
“We are excited to partner with Carnegie Learning, which will allow us to integrate their high quality educational and adaptive learning technology into our platform,” said Gregory Cappelli, Co-CEO of Apollo Group and Chairman of Apollo Global.
The full significance of the University of Phoenix bet on adaptive learning platforms goes beyond pure dollars and became clear when the school announced the closure of 115 of its 240 locations. The stated usage of the savings from campus closures is primarily to further invest in the platform as described by the Phoenix Business Journal.
The $300 million in savings will be used to invest more heavily in the company’s online learning platform as well as renovating and modernizing Apollo’s existing 112 locations.
“This decision is in direct response to student demand to what students have told us and demonstrated what they want,” Clark said.
The potential for a new learning platform in the marketplace
This is a massive investment in a next-generation LMS, and there are clear signs that Phoenix does not plan to keep the system for internal use only. In October 2011 the Chronicle reported on Phoenix’s potential plans to sell its services, including access to this learning platform. Perhaps the real intent of the patent is to protect intellectual property for a system that they plan to license and sell. From the Chronicle:
Facing new regulations and slowing enrollment for their degree programs, companies like the Apollo Group, parent of the University of Phoenix, are quietly developing or expanding other educational services that they could sell to nonprofit colleges and corporations, moves that could signal the future direction of the for-profit college industry.
Among other things, that means it might not be long before the Apollo Group seeks out other colleges as customers for the electronic learning platform it has spent years and millions of dollars developing. A company spokesman said licensing that platform to other colleges is one of the many options its new Apollo Educational Services division is exploring. Although the entire Phoenix student body won’t be fully on the new platform until spring, Apollo has been inviting higher-education leaders to its San Francisco development center to show off the new system for the past several months.
“We’d love to partner with existing educational institutions. We’d love to partner with global companies,” says Mark Brenner, Apollo’s senior vice president for external affairs.
What we might be seeing soon is the release of a billion-dollar adaptive online learning platform available to other companies and institutions. But what is the reality, and does the patent award give any indication of the limits of big data in education? I’ll explore those questions in my next post.
The post The Billion-Dollar Bet on an Adaptive Learning Platform appeared first on e-Literate.
The University of Phoenix recently was awarded a patent (#8341148 B1) for an adaptive activity stream related to its online learning platform. From an initial reading of the patent, it appears very broad to me (deja vu all over again). From the press release:
Apollo Group (APOL), the parent company of University of Phoenix®, today announced that it received a United States patent related to its innovative online classroom platform. The patent was awarded for the University’s new Academic Activity Stream that will consolidate student activities, engagement, and interaction into one unified learning space. The stream will showcase unique personal management features that allow students to more efficiently manage their coursework and classroom experience.
This patent is the next step in the “Learning Genome Project” – UoP’s major investment in a next-generation online learning platform. The basic idea of the Academic Activity Stream is to rank information in a user’s activity stream based on individual interests, past history, and learning objectives – rather than merely ranking the items chronologically. From Google’s listing of the patent:
Techniques are described herein for implementing an activity stream. An activity stream includes a ranked list of objects that are associated with each other. Within an activity stream, an object (such as an assignment or course syllabus) may have events associated with it. For example, a student can “comment” on an assignment. The assignment may be listed as an object within the activity stream, and the comment may be posted under the assignment, in the activity stream, as an event that is associated with the assignment. A variety of objects can appear in an activity stream, and each object may have comments and other events listed underneath.
The location of an object in the activity stream changes based on events that happen in association with objects in the stream. However, rather than simply being pushed further down the list every time a new object is added to the activity stream, techniques are provided for moving objects within the activity stream in other ways.
The specific patent claim #1 (most other claims refer to claim #1):
A method, comprising: generating a first ranked list of objects for an activity stream; in response to detecting a plurality of events associated with a plurality of objects, placing each of the plurality of objects in positions in the first ranked list based on the order in which the events occurred; in response to detecting a first event associated with a first object in the first ranked list of objects, moving the first object in a first position in the first ranked list, wherein the first event is associated with user activity and the first object is associated with a class; in response to detecting a second event associated with a second object in the first ranked list of objects: moving the first object to a second position in the first ranked list, wherein the second position is lower than the first position in the first ranked list; placing the second object in the first position in the first ranked list; wherein the second event occurs after the first event; maintaining the ranked list as a plurality of segments, wherein each segment is associated with a time period; maintaining a first segment that is associated with a first time period; after detecting the expiration of the first time period: causing the portion of the ranking of objects that is associated with first segment to remain static; maintaining a second segment that is associated with a second time period; dynamically updating the portion of the ranking of objects that is associated with the second segment in response to detecting a fourth event without updating the portion of the ranking of objects that is associated with the first segment; wherein the method is performed by one or more computing devices.
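The dense claim language above boils down to a fairly simple mechanism: events promote their associated objects to the top of the stream, and the stream is divided into time-period segments whose rankings freeze once the period expires. Here is a minimal sketch of that behavior in Python (the class and method names are my own invention for illustration; the patent does not publish an implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    """A time-bounded slice of the stream; its ranking freezes on expiration."""
    objects: list = field(default_factory=list)  # most recent activity first
    frozen: bool = False

class ActivityStream:
    """Sketch of claim #1: events move objects to the first position of the
    dynamic segment; expired segments keep a static ranking."""

    def __init__(self):
        self.past = []            # frozen segments (static rankings)
        self.current = Segment()  # dynamic segment for the current time period

    def on_event(self, obj):
        # An event (e.g., a comment on an assignment) re-ranks its object
        # to the first position; earlier objects shift down one slot.
        if obj in self.current.objects:
            self.current.objects.remove(obj)
        self.current.objects.insert(0, obj)

    def expire_period(self):
        # When the time period expires, the segment's ranking becomes
        # static and a new dynamic segment begins.
        self.current.frozen = True
        self.past.append(self.current)
        self.current = Segment()

    def ranked(self):
        # Full stream: dynamic segment first, then the frozen segments.
        return self.current.objects + [o for s in self.past for o in s.objects]
```

So a comment on “assignment 1” moves it above the syllabus today, but once the week’s segment expires, that ordering is locked in and new events re-rank only the current segment. That last part is what distinguishes the claim from a plain chronological feed.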
The patent lists several “embodiments” of the concept – examples of approaches that could be pursued to implement the activity stream. These embodiments include re-ranking of a book chapter based on recent student comments or preferences and notifications when 75% of students have completed an assigned reading.
Figure 2 shows an example user interface (very poor quality):
I haven’t figured out why the patent includes descriptions of hardware-based computing devices as an embodiment of the concept.
What is concerning is that this patent appears to be quite broad in its claims, bringing up the painful memories of the Blackboard patent from 2006 that was eventually invalidated. Am I reading this correctly in that it essentially patents any individualized stream within a learning platform? More to come.
The post University of Phoenix Patents Adaptive Activity Stream for Its Learning Platform appeared first on e-Literate.
Phil and I will be giving a webinar with the same title as this blog post for EDUCAUSE ELI on Monday, February 11th at 1 PM ET. It’s aimed at folks on campuses, especially Presidents, Provosts, and other academic decision-makers, who weren’t necessarily focused on online learning in the way that they are now that MOOCs have gotten their attention. We’re going to try to position MOOCs in the larger landscape of online learning and talk a little bit about how campuses can think about the various options in the context of their institutions’ respective missions and strategic goals. It’s a lot to try to accomplish in an hour, but I think we can give people a basic framework and a few important questions to ask themselves.
The post Beyond the MOOC Hype: Getting Serious about Online Learning appeared first on e-Literate.
Here’s a nifty video summary of a doctoral dissertation by Derek Muller that a client pointed out to me:
The basic gist is that students have pre-conceived notions that are wrong, and it is very hard to dislodge those mistaken notions. If you show them a video with an accurate explanation, the students will say that the video was clear and helpful, but they will misremember it as confirming their (mistaken) preconceived notions. In short, they won’t learn. In contrast, if you show them a video that starts by directly stating and then refuting their misconception, they like the video less and say it is confusing, but they actually learn more. This is a really important pedagogical point to know whether you are giving traditional in-class lectures, writing curricular materials, or creating one of those oh-so-modern video lectures that all the cool kids are into these days.
It’s also a good example of the kind of insight that big data is completely blind to. And it gives us good reason to be skeptical that taking large lecture courses online, turning them into REALLY large lecture courses (with nice videos), and expecting that new and more effective pedagogies will rise out of the data because, you know, science or something, is more of a hope (or a fantasy) than a plan to improve education.
Let’s say you have one of those ultra-hip MOOC platforms with a bazillion courses running on it and a hadoop thingamabob back end that’s tied to a flux capacitor, an oscillating overthruster, and a machine that goes “ping!” You’ve got all the big data toys. And let’s say that, among the many thousands of lecture videos being used on your platform, a bunch of them are designed the way Muller’s work suggests is best practice. Some of these were done this way consciously with awareness of the research. Some were done this way on purpose but based on intuitions by classroom teachers. They don’t have a name for what they’re doing, and they don’t really think about it as a general pedagogical strategy, but they have learned from experience that there are certain spots in their courses where they have to confront some misconceptions head-on. And then some of the videos may be in the Muller format completely accidentally. For example, maybe there’s a video of students working through a problem together. The first idea they come up with is the misconception, but they talk it through together and come up with the right answer in the end. This wasn’t planned, and the teacher who posts the video may not even be aware of why this sequence of events makes the video effective. Maybe she believes in the value of watching students work through the problem together and posts lots of student conversation videos, some of which end up being in Muller’s format and some of which don’t. Let’s assume that many of these videos are effective at teaching the concepts they are trying to teach, and let’s also assume that they are effective for the reason that Muller hypothesizes.
The first question is whether our super-duper, trans-warp-capable, dilithium crystal-powered big data cluster would even identify these videos as noteworthy. The answer is maybe, but probably not reliably so. Muller set up a controlled experiment with one variable designed to test a well-formed hypothesis. He was measuring whether this style video was more effective than the alternative of a more traditional lecture delivery. In science, this is called a “control of variables strategy.” In product development, it’s called “A/B testing” or “split testing.”
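To make the A/B testing point concrete, a split test like Muller’s reduces to comparing two proportions, such as the pass rate for students shown the traditional video versus the misconception-first video. A minimal sketch using a two-proportion z-test (the numbers below are invented for illustration, not Muller’s data):

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """One-sided two-proportion z-test: did variant B outperform variant A?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # One-sided p-value from the standard normal CDF
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))
    return z, p_value

# Hypothetical post-test pass rates:
# traditional lecture video (120/300) vs. misconception-first video (150/300)
z, p = two_proportion_z(120, 300, 150, 300)
```

With these invented numbers, z is about 2.46 and p is under 0.01, so the difference would count as significant. The point is that this only works because one variable was deliberately isolated; the machinery is trivial once the experiment is designed.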
Big data usually doesn’t work that way. Instead of creating a tightly controlled set of conditions, it usually looks at what’s available “in the wild” and relies on the massive numbers of examples it has plus the power of computers to do lots of comparisons really fast to come up with inferences. Let’s say, for example, that you’re a medical researcher trying to figure out the role of genetics in a particular type of cancer. There are many, many genes that could be involved, and it may be that a bunch of them are involved but interact in complex ways. And, of course, environmental factors such as diet or exposure to carcinogens, as well as a certain amount of chance, can all impact whether a particular individual gets cancer. The good news is that, while there are many variables, they are finite in number, mostly known and measurable, and mostly have a quantifiable and reasonably regular impact on the cancer outcome (if you understand all the interactions sufficiently well). If you have a large enough database of patients with enough genetic material and good details on the non-genetic factors that you think probably contribute to the likelihood that they will get cancer, then a big data approach will probably help. There are regular patterns in the data. The main challenge is sifting through the mountains of data to find the patterns that are already there. Big data is good for that kind of problem.
But education doesn’t work that way. The same video may impact different students very differently, due to variables that mostly aren’t in our computer systems. For one thing, classes can be taught in many, many different ways, some of which matter and some of which don’t. Again, if we were doing a split test in a MOOC context, we could control the variables by changing just one video for a class that is otherwise the same for many students. That approach has significant research value, but it’s not big data magic. It’s educators who come up with hypotheses and test them using a large data set. Students are also very different, in important ways that often don’t show up in the data that we have in our online systems. Silicon Valley is not going to make us magically smarter about teaching.
Now, big data enthusiasts might argue that I’m not thinking big enough in terms of the data set, and that could make a difference. Knewton, for example, claims that their system can track students across courses and semesters and test hypotheses about them over time. For example, suppose a student is struggling with word problems in a math class. It’s possible that the student is having difficulty translating English into math variables, or trouble identifying the important variables in the first place. Those are both math-related issues. But it’s also possible that the student just has poor English decoding skills in general. Knewton claims that their system can hold all of these hypotheses about the student and then test them (presumably using some sort of Bayesian analysis) across all the courses. If there is evidence in the English class that the student is struggling with basic reading, then that hypothesis gets elevated. And maybe that student gets extra reading lessons slipped in between math lessons. It sounds really cool. I haven’t seen evidence that it actually works yet, and to the degree that it does, it raises other questions about whether you need all student educational interactions to be on the platform in order to get the value, who owns the data, and so on. Put this one in the “maybe someday” category for now.
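The kind of cross-course hypothesis weighing described above can be illustrated with a toy Bayesian update. Everything here is invented for illustration (the hypotheses, priors, and likelihood values are mine, not Knewton’s actual model), but it shows how evidence from an English class could elevate the general-reading hypothesis:

```python
def bayes_update(priors, likelihoods):
    """Update competing hypotheses given one piece of observed evidence.
    priors: {hypothesis: P(h)}; likelihoods: {hypothesis: P(evidence | h)}."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

# Three candidate explanations for a student's trouble with math word problems
priors = {"translation": 1/3, "variable_id": 1/3, "reading": 1/3}

# Observed evidence: the student also struggles with comprehension in English
# class. That observation is far more likely if the real problem is general
# reading skill than if it is a math-specific issue.
likelihoods = {"translation": 0.2, "variable_id": 0.2, "reading": 0.8}

posterior = bayes_update(priors, likelihoods)  # "reading" rises to 2/3
```

After the update, the general-reading hypothesis carries two-thirds of the probability mass, which is the mechanical sense in which a hypothesis “gets elevated” and could trigger the extra reading lessons.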
But even granting that you can get sufficiently rich information about the students, there’s another hard problem. Let’s say that, thanks to the upgrade in your big data infinite improbability drive made possible by your new Spacely’s space sprocket, your system is able to flag at least a critical mass of videos taught in the Muller method as having a bigger educational impact on the students than the average educational video by some measure you have identified. Would the machine be able to infer that these videos belong in a common category in terms of the reason for their effectiveness? Would it be able to figure out what Muller did? There are lots of reasons why a video might be more effective than average. And many of those reasons are internal to the narrative structure of the video. The machine only knows things like the format of the video, the length, what kind of class it’s in, who the creator is, when it was made, and so on. Other than the external characteristics of the video file, it mostly knows what we tell it about the contents. It has no way to inspect the video and deduce that a particular presentation strategy is being used. We are nowhere close to having a machine that is smart enough to do what Muller did and identify a pattern in the narrative of the speaker. Now, if an educational researcher were to read Muller’s research, tag a critical mass of the relevant videos in the system as being in this style, and ask the machine to find other videos that might be similar, it’s possible that big data could help. It might come back with something like, “Here are some videos that seem to have roughly the same kind and size of effect on test scores as the ones with the Muller tag.” Maybe. Even then, you’d have to have human researchers go through the videos the computer flagged—and there might be a lot of them—to see which ones really use the same strategy and which ones don’t. That would be better than nothing, but it’s far from magic.
By the way, the low-tech method commonly used now is even worse. Not only is it useless, it’s actually harmful. A/B tests are rarely done on curricular materials, but surveys and focus groups where students self-report the effectiveness of the materials are common, particularly among textbook publishers. And in that situation, the videos that the students report to be harder and more confusing would actually be the more effective ones. But, lacking any measure other than the survey of their real effect on learning, the publishers (or teachers) generally would toss out the more effective videos in favor of the less effective ones.
Whether we’re talking about machine learning or human learning about how to improve education, the real problem is that we don’t have a vocabulary to talk about these teaching strategies, so we can’t formulate, test, and independently verify our hypotheses. In the machine learning example, we could create an arbitrary “Muller” tag in the system, but we don’t have a common language among teachers where we say “Oh, yeah, he’s using the confront-the-misconceptions (CTM) lecture strategy for that one. I prefer doing a predict-observe-explain (POE) experiment to accomplish the same thing.” If we had a widely adopted language that describes the details of why instructors think a particular aspect of their lecture or their discussion prompt or their experiment assignment is effective at teaching, then big data could be helpful because we could tag all our videos with pedagogical descriptions. We could make our theories about teaching and learning visible to the system in a way that it would be more able to test. And, perhaps even more importantly, human researchers could be more effective at collaborating with each other on testing theories of teaching and learning. Right now, what we’re trying to do is a little like trying to conduct physics research before somebody has invented calculus. You can do some things around the edges, but you can’t describe the really important hypotheses about causes and effects in learning situations with any precision. And if you can’t describe them with precision, then you can’t test them, and you certainly can’t get a machine to understand them.
More on this in a future post.
One of the fastest growing educational delivery models over the past year is the school-as-a-service concept, where companies like Pearson, 2U, Academic Partnerships and Deltak provide the services needed for a traditional institution to create an online program at scale. As I have often pointed out, traditional institutions have organizational designs and cultures that often prevent them from successfully creating self-sustaining online programs, which is the reason for the barrier in the landscape diagram. The school-as-a-service model provides a bridge over that barrier.
Of course, the model that has grown even faster is the MOOC. Yesterday Academic Partnerships launched a new concept called MOOC2Degree that attempts to combine these two models, thus giving working adults (the sweet spot of their market) a lower-cost, easier method to earn credits in an online program.
The most obvious aspect of MOOC2Degree is highlighted in the name – providing a pathway for MOOCs to help lead to a degree. From the press release:
Through this new initiative, the initial course in select online degree programs will be converted into a MOOC. Each MOOC will be the same course with the same academic content, taught by the same instructors, as currently offered degree programs at participating universities. Students who successfully complete a MOOC2Degree course earn academic credits toward a degree, based upon criteria established by participating universities.
Some of the early participants in Academic Partnerships’ MOOC2Degree initiative include: Arizona State University, Cleveland State University, Florida International University, Lamar University, University of Arkansas System, University of Cincinnati, University of Texas at Arlington College of Nursing, University of West Florida and Utah State University. Additional universities are joining the initiative in the months ahead as they work through their processes for providing MOOCs.
This announcement is somewhat similar to the Semester Online program announced by 2U (how long do we need to point out this is formerly 2tor?) in November. In that program 10 partner institutions offer open online courses for credit, although they don’t consider the courses technically to be MOOCs. One significant difference is that 2U targets elite universities for specific domains, whereas Academic Partnerships has a broader focus, primarily targeting public colleges and universities, regardless of status.
Here are some initial thoughts on MOOC2Degree:
Betting on a megatrend
In a phone interview, Randy Best, founder and chairman of Academic Partnerships, said that the real megatrend is not the emergence of MOOCs, but rather the move to universal, affordable access to education. This populist view runs contrary to 2U, Coursera, Udacity and edX, all of which target elite universities, betting that their brands and faculty are important to attract large numbers of students.
I, for one, am sympathetic to this view, as I indicated to Josh Kim in his recent set of prediction interviews:
Despite xMOOCs targeting ‘elite’ higher ed, it will be non-elite institutions that aggressively adopt the model and define the 2nd generation of MOOCs.
Converting courses to MOOCs
How will the partner institutions convert their courses to MOOC courses? The first issue is that each school chooses how to offer their MOOC, and many will offer them on the same LMS already in use. While this vendor-neutral approach has its benefits, I could see a problem if the school’s LMS is not set up to be a MOOC platform. The platform has to scale quickly if the courses grow in size to thousands of students. Many self-hosted LMS solutions are not capable of scaling in this way, nor are simple managed hosting solutions that have dedicated hardware per institution.
The second issue is that MOOCs need to be designed so that students can get into the course content and interactions as easily as possible. A clunky course design, as well as a clunky LMS design, will run counter to this need.
The third issue is instructional design, as any interactions and activities need to be able to handle large numbers of students with unpredictable participation. Who is providing the expertise and instructional design advice to ensure that each course is truly ready to be a MOOC? I assume that Academic Partnerships is playing this role, but I am not sure.
Randy Best indicated two methods the partner institutions will use: altering start dates as necessary (allowing a course to begin a month early to work out logistics, for example) and throttling enrollment to keep courses to a manageable size. Another approach to handling this issue can be seen in another recent Academic Partnerships announcement referenced in the press release.
Due to the partnership between Academic Partnerships and Canvas, universities can use the Canvas Open Network System at no cost to offer MOOC2Degree courses.
Convincing partner institutions
I asked Randy Best if he had to strong-arm any schools to get them to try out the concept. After all, many traditionalists view MOOCs as a competitive threat that might harm institutional brands or revenue potential. The answer was interesting: Randy said that all of the initial schools were already considering how to explore MOOCs, and MOOC2Degree provided a workable concept that made sense. This concept potentially gives schools a sustainable model combining the free, open nature of MOOCs with the potential for credit-bearing, tuition-generating online programs. In other words, schools wanted to get into MOOCs, but were much more eager to do so once the concept made sense.
In my mind, this is another key milestone in the rapid transformation of MOOCs into the next generation – in combination with Instructure’s launch of the Canvas Network, Udacity’s move to MOOC 2.0, and the American Council on Education’s moves to recommend credits for MOOCs.
Update 1/24: For additional coverage:
- Inside Higher Ed: “MOOCs for Credit”
- New York Times: “Public Universities to Offer Free Online Classes for Credit”
- Chronicle of Higher Ed: “Universities Try MOOCs in Bid to Lure Successful Students to Online Programs”
The post Further Evolution of MOOCs with Academic Partnerships and MOOC2Degree Launch appeared first on e-Literate.
The following is a rough transcript of my presentation at the 20 Million Minds conference on January 7th.
Thank you very much, and thanks to everybody for coming to the conference.
We seem to be in a unique situation. I had someone remark to me in the hallway discussions leading up to the event that we have quite a unique group of people here, both in terms of the online educational programs and in terms of statewide system administration and faculty.
Jeff Selingo pointed out very well some of the major forces affecting higher education and affecting where we’re going. A lot of people understand this current situation is not a temporary setback, not a temporary change where we can pull back to the normal once certain things change. We’re in a situation where an entire education ecosystem is changing and putting us into uncharted territory. How higher education changes will be up to a lot of the people in this room, as far as key choices go. How can we use the power of online education to transform traditional institutions and systems? I think this meeting is helping to set up a lot of the discussion on the potential changes.
Online education has been around since 1994 – that is the earliest point where you could truly say the Internet helped deliver postsecondary education. On one hand, that is a very small amount of time in terms of academic history. The model we’ve used for academia has been around for hundreds of years, so to a certain degree online education is quite new, and we don’t fully understand what the impact is going to be.
On the other hand, online education did not start in the past two years – it has a deeper history than that covered in many media outlets. We’ve had a huge amount of interest, particularly in the past two years, about online education at the national and state level that has focused just on the recent news and the recent innovations. It’s going to be important for us to understand the broader picture of what online education has to offer – what different models are available and how they can help address the problems of quality, access and cost that we are discussing today.
For public higher education, as Jeff has pointed out, one in nine students is in California. That has an enormous impact on the entire country, not just the state. Any transformation of California public higher education will not come from just one type of online education – we need to be cognizant that this is not a one-problem / one-solution issue.
Online education is a new medium, and we really need to understand the potential for the various models to transform not just new models of education, but traditional institutions as well. One of the things we’re hoping to get today is a broader perspective so you can see a lot of the innovations out there and what the potential is. But it’s also going to be very important to get faculty perspectives, student perspectives, and administration perspectives, and to get some of the key issues out on the table. It’s not going to be just a matter of plugging in a single solution to make it work. For that reason I want to thank 20 Million Minds for quickly putting this forum together to get the discussion going, and I expect the conversation is just starting and will continue.
The landscape that you’re seeing in the graphic is meant to give a broader perspective of what’s going on in online education, and we need to get away from the duality of simply online education versus traditional education. There are different models at play, and they have different qualities. One way to lay out this landscape of models is along the dimension of modality. There is a spectrum of modality including face-to-face, and hybrid or flipped classrooms where face-to-face time is augmented by online activities and content delivery. There are individual online courses supplementing face-to-face programs. There are fully online programs, and, getting away from the standard cohort-based model, there are even self-paced programs based on the time and availability of the individual student.
The other dimension is course design, which gets to the core of the academic mission: how knowledge is conveyed and learned by students. It turns out that how courses are designed is a major determinant of why certain models exist. The traditional course design involves a single faculty member designing a specific course, where they design and teach that course. There are certain cases where multiple faculty members design a course, particularly for multi-disciplinary examples. And at the top level there’s a concept that has significant implications, and that’s the instructional design team. It’s not just individual faculty; you have a team that includes instructional designers, multimedia specialists, and even subject matter experts from industry. It’s a team-based approach to designing a course.
The reason there’s a wall here is that culturally, there’s a significant barrier for an institution to move from the traditional mindset and be able to get into this concept of team-based course design. As institutions deal with how to adapt to online education, they need to be aware of this barrier and understand the different options to go over or around or even avoid this barrier.
I’m not going to go into the details of all of the models today, but I would like to highlight a few. As mentioned before, some of the models exist to deal with this barrier of a team-designed course. You certainly have a lot of face-to-face courses which use online components, and we’re going to hear about this from the first panel that includes hybrid or flipped classrooms.
The biggest change over the past two years, I think most people would agree, is the concept of the massive open online course, or MOOC. This is one of the first attempts to take advantage of the power of the Internet in terms of scale and access, and MOOCs have driven a lot of the recent national conversation. One interesting thing about MOOCs is how they're depicted on the bottom side of the barrier: MOOCs actually provide a way to attack scale and access while still working through the model of individual faculty designing a course.
Up on the top side of the barrier, traditional schools have often had to create a separate organization to be able to provide online courses at scale. We have UC Online and CSU Online here, and there are other examples such as Rio Salado College in Arizona and University of Maryland University College. There are several examples of a separate organization within the overall structure of a traditional institution.
There are also service organizations, often called school-as-a-service, that provide the services that traditional schools are not comfortable doing. The idea is “We’ll help you go online by providing the services that your school is not capable of or does not want to do strategically, and let you focus on the academic and the admissions processes which are critical to your institution”.
One other model I'll highlight that we'll hear about is competency-based education. There are different versions of competency-based education, but most are based on self-paced courses. In these models, the design starts by defining the competencies a student needs to master, then gives students the time – and the ability to repeat the material or retake the assessments – before letting them move to the next level.
There are multiple models out there, and what you're going to hear today is a first-hand perspective from the people who have helped create many of these models. The panels will also get into issues such as the key barriers California public higher education must overcome to leverage the power of online education – not just to try out an interesting pilot, but to diffuse that innovation throughout the system.
For the rest of the day we're also going to hear from students, administrators, and faculty, to get their thoughts on the big issues we need to tackle if we're going to maintain or improve the quality of education in California while leveraging online models.
The post Re:Booting CA Higher Education – Transcript of Phil Hill Presentation appeared first on e-Literate.
While we have occasionally written about college costs and budgets here at e-Literate, it mostly hasn't been part of our brief as a site that focuses on technology-mediated education. That is changing, in large part because costs and budgets are increasingly becoming the drivers for change, in California and elsewhere. But I've been astonished at how little information seems to be readily available, and how little analysis is out in the academic press, about even the basics of how college and university finances work.
For example, let's talk about how enrollments impact college budgets. Phil wrote a good post and follow-up post about a month ago about the recession-driven bump in college enrollments. His graph tells an interesting story:
Phil was pointing out the very large divergence between enrollment growth and employment growth. Specifically, the gap is four times the size of the gaps in previous recessions. This suggests that recent graduates and soon-to-be graduates may struggle to find work. It's a thought-provoking piece of analysis, and it prompted me to do a little research of my own. But the deeper I dug, the more questions I had.
Let’s start with the title of Phil’s post: “This time it’s different.” How different is it? If you look at the recession data from the United States since the Great Depression, the character of the recession looks pretty different. Here’s a graph from the economics blog Calculated Risk:
Obviously, this doesn’t have any enrollment data. But the magnitude of the recession is certainly different. It’s more than twice as deep as the second-deepest recession in the time frame of Phil’s graph. But it’s also much longer. What’s the causal relationship (if any) between recessions and enrollment growth? So far, I haven’t been able to find any analysis of this question. Is there reason to believe that a deeper, longer recession of the kind we just went through could be the primary driver of the enrollment spike we’re seeing now? And if so, what characteristics of the recession are most germane? If you look closely at Phil’s chart, the current divergence really started during the 2001 recession and grew during the 2007 recession. The 2001 recession wasn’t as deep as some of the other recessions on that chart, but it was the second-longest in terms of recovery of the job market (as shown in the Calculated Risk chart). Then again, we don’t see any such correlation after the 1981 recession, which was almost as long as the 2001 recession in terms of job recovery.
Anecdotally, I can tell you that, as the post-2007 job losses ground onward, the community college where my wife worked saw an increase in enrollments of people who had lost their jobs and gone back to school because they didn't have any significant hope of getting another job with their qualifications any time soon. But, looking at both graphs here, it seems possible that the character of this recession is anomalous enough that we may not be able to learn much from past patterns. This time really could be different.
Then again, maybe not. If you look at this chart of international recessions from the Oregon Office of Economic Analysis blog, it tells a different story:
From a global perspective, there have been five recessions that were worse than the 2007 United States recession in the past 25 years (from the perspective of depth of impact on the job market and length of job market recovery time). So what do we know about the impact of these recessions on college enrollment? I haven’t been able to find anything so far.
And, of course, it would be simplistic to assume that any relationship between a recession (or a dip in the employment numbers) and enrollment numbers – assuming there is one – is straightforward. For example, do we see some recessions in which it is more evident than in others that unskilled or semi-skilled jobs are going away permanently? If you're a factory worker and the last factory within commuting distance just closed, you might think more seriously about going back to school than if three factories in your area each cut their workforce in half but stayed open, even though the latter case would show up on the graphs as a larger net job loss than the former. What do we know about long-term job shifts as a result of the last two recessions and their impact on enrollments? I haven't been able to find anything in the reports I've read so far.
And what impact does a change in enrollment have on college budgets, anyway? For state schools, it’s complicated. The schools themselves obviously gain tuition for every new student. But in a state system, every student is subsidized. So each new in-state student actually costs the state. That’s why state university systems are often interested in decreasing time to graduation and in reaching students who pay out-of-state tuition through distance learning programs.
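The subsidy arithmetic is worth making concrete. Here is a toy sketch in Python using entirely made-up figures (these are illustrative assumptions, not actual California or any state's budget numbers): when in-state tuition covers only part of the cost of instruction, every additional subsidized year costs the state money, which is why shorter time-to-degree and full-pay out-of-state students both matter to the budget.

```python
# Hypothetical figures for illustration only -- not real budget data.
TUITION_IN_STATE = 6_000        # annual in-state tuition paid by the student
TUITION_OUT_OF_STATE = 24_000   # annual out-of-state tuition
COST_OF_INSTRUCTION = 15_000    # annual cost to educate one student

def net_cost_to_state(tuition: int, years_to_degree: int) -> int:
    """Total subsidy the state covers over a student's time to degree.

    If tuition exceeds the cost of instruction (e.g. out-of-state rates),
    the state's subsidy for that student is zero.
    """
    per_year = max(COST_OF_INSTRUCTION - tuition, 0)
    return per_year * years_to_degree

# An in-state student who takes 6 years costs the state more than one
# who finishes in 4 -- hence the interest in reducing time to graduation.
print(net_cost_to_state(TUITION_IN_STATE, 6))      # 54000
print(net_cost_to_state(TUITION_IN_STATE, 4))      # 36000
# An out-of-state student more than covers the instructional cost.
print(net_cost_to_state(TUITION_OUT_OF_STATE, 4))  # 0
```

Under these assumed numbers, shaving two years off an in-state student's time to degree saves the state $18,000, while each out-of-state distance learner adds tuition revenue without adding subsidy burden.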
Admittedly, I am new to this topic and not an academic, so it would not surprise me if there are studies that answer at least some of these questions. My point is that this kind of analysis is nowhere to be found in the current debates about policy. What has been the impact of the enrollment surge on the budgets of the California systems of higher education, and how long can we expect that surge to last? According to a recent study by the Western Interstate Commission for Higher Education, college enrollment in California is expected to drop significantly by 2019. How should this projection impact the goals for the current drive toward online learning? According to the same study, there will also be a significant increase in the percentage of non-white college students in California in the same period. It seems plausible that this shift could correlate with a socio-economic and educational preparedness shift, just based on what we know about the distribution of non-white Californians in areas that are poorly served by their K12 education systems. Will remediation be a bigger cost challenge for the state in the coming years than it is this year? If so, shouldn’t California be “skating to where the puck is going to be”?
We need some serious data journalism in the area of the economics of education right now.
The following is a rough transcript of Jeff Selingo's keynote at the 20 Million Minds conference on January 7th. Any errors in transcription are mine, and sections that were garbled are so marked. I'd like to thank Rovy Branon for his recordings of these sessions. I'll post additional transcripts soon.
We're gathered here today at one of the great public university institutions. This past summer, at another great public institution on the other side of the country, a drama played out that revealed the immense pressures affecting colleges and universities as their financial and historical foundations swiftly shift [under us].
Last June, as many of you know, on a late Sunday afternoon, the University of Virginia announced that its president, Teresa Sullivan, was stepping down after just two years on the job, citing philosophical differences of opinion with the board. The resignation of this popular leader shocked the campus community, and over the next few weeks opposition to the board's action mounted. Angry students, faculty, and alumni took to Twitter and Facebook. The governor threatened to withhold financial support and told the board to figure this out or he'd replace them all.
What is interesting about this debate is that in the middle of it, the student newspaper on the University of Virginia campus, the Cavalier Daily, filed a public records request and acquired emails sent between board members in the weeks leading up to the resignation and the decision. What they revealed was pretty much a lack of cooperation on the firing of Teresa Sullivan. What was interesting was that board members exchanged a series of newspaper articles and columns – from the Wall Street Journal, the New York Times, and the Chronicle – on the pressures changing higher education. One of them was about elite universities that were offering MOOCs – something that we'll be talking about today.
The board chair asked why we can't afford to have MOOCs. The board voted to reinstate Sullivan, but the drama that unfolded at Virginia last summer is, from what I hear talking to presidents and board members, happening at campuses across the country as presidents and boards try to find a sustainable path forward.
The changes are prompted by what I think is a perfect storm of five forces – financial, political, demographic, and technological – that are battering higher education right now. I want to quickly walk through them this morning. I think they set the stage for the discussion, and they show in stark terms why change is inevitable.
The first one is the sea of red ink. We talk a lot about student loan debt in this country but very little about institutional debt. This number, 307 billion dollars, represents the total amount of debt taken out by institutions. The line graph shows the percentage of that debt taken out by public universities relative to their overall financial resources. One third of all colleges in the US are now in a financial position significantly weaker than before the recession, and according to one financial analysis those colleges are on an unsustainable path. Another third are at risk of [becoming unsustainable]. Expenses are simply growing much faster than revenue. Net tuition revenue – the cash institutions have to pay faculty and administrators and do the work day in and day out, that is, tuition revenue after financial aid – is flat or declining at 60 percent of American colleges and universities.
I don't think I really need to tell you about this one: state disinvestment in education, not just in California but across all of public higher education. By some measures, state taxpayer support for higher education hasn't risen since 1995, when there were 14 million fewer students in the system. In 2012, 29 states paid less for higher education than they did in 2007. If current trends continue, led by Colorado in 2022, every state will have gotten out of the business of funding higher education by 2059. The trend is going in the wrong direction.
The third force is that much of the growth in higher education in the past decade has been fueled by well-off, well-prepared students, and that well is drying up. This graphic is just one example – an analysis done for a private residential college in the northeast. It looks at the total number of 18-year-olds in 2009. It started with 4.3 million students, but then you filter students out: the students who aren't going to college, the students who have no intention of going to college, those who didn't express interest in a 4-year residential college on the east coast, and – oh, by the way – those who didn't have the money to pay for this private residential college. Out of 4.3 million students, only 996 were left. Dozens of schools are after that small group of students, and they all need those students. In some ways it's much like the efforts of publics in many states to recruit out of state in order to boost their revenue, chasing those students who can pay their full way.
The University of Oregon and Arizona State University enroll more freshmen from California than six Cal State campuses. I think that at some point the well of these students is going to dry up as everyone competes for a smaller and smaller group.
The last two forces that I want to talk about today are perhaps the most important. The first is that the alternatives in higher education are improving. This [slide] is just a sampling of those alternatives. [It isn't just the MOOCs from Stanford in California that are here today; we also have StraighterLine and the competency-based degrees from Western Governors.] This is what is going to enable the future – a little of what we heard in the opening remarks [from Darrell Steinberg] – allowing students to bundle their own degrees. A third of students transfer between colleges during their four-year degrees. The idea of going to college at 18, staying there for four years, and graduating is a romantic notion of higher education, but it's simply not reality. Most students don't get their higher education that way. Many of them drop out and go back to school later on; the typical student is not an 18-year-old.
Students are less brand-loyal than before, and they're using new pathways to college. The next generation of students coming to college is accustomed to using technology throughout their lives. College leaders, in my opinion, don't get this. Last spring I attended the annual meeting of the American Council on Education here in LA, and Sal Khan was the keynote speaker. The night before, he had been profiled on '60 Minutes', and he asked those in the room who had never heard of the Khan Academy [to raise their hands]. [A fifth] of the hands in the room went up. It was interesting that in that month his lessons reached more than four million people. I think many policy leaders just don't get the idea of really changing the way [we educate]. These alternatives – whether it's Western Governors, which is growing at a breakneck pace with competency-based degrees in Washington state, Texas, and Indiana, and now the University of Wisconsin System and Northern Arizona University, which will be offering competency-based degrees next year – all of these alternatives are improving.
Finally, I think the most important force impacting the future of higher education is what I call the 'value gap'. There is no doubt in the minds of Americans that higher education is worth it, despite all the talk of 'don't go to college' – we see [articles every week] saying don't go to college, go start the next Facebook or the next Apple. Survey after survey shows that Americans think higher education is core to the success of their children, and those with a college degree earn much more over their lifetime [than others]. But Americans increasingly want to know what they're getting in return for what they're spending on higher education.
Two research analysts posed this question last year in a survey of Americans and college presidents, asking both groups to rate the job the higher education system is doing in providing value for the money spent. 57% of the public said fair or good, but 76% of college presidents said excellent or good. This, to me, is the value gap in higher education right now.
Americans want to know the value of going to a specific institution, and increasingly there are tools to help figure this out. Three states – Tennessee, Arkansas, and Virginia – have now released data that matches graduates of specific colleges to earnings data in the state unemployment insurance program. That allows users to find the first-year salary, and eventually the five-year salary, by college and by program. If you want to know what an engineering graduate or a business graduate from George Mason University makes, and compare it to other universities in the state, you can now do that in Virginia – and in Tennessee and Arkansas – and next quarter you'll be able to do it in a couple more states. These tools are now at the disposal of consumers, and I think they're only going to improve over the next couple of years to help students make better choices.
Where do we go from here? Despite all these negative statistics I've just cited, I'm actually [excited] about the future of higher ed. Sure, a lot of institutions are in real trouble; I think many of them are going to merge or, in some cases, close. I think the top institutions in this country will continue to thrive. The middle has [a lot] to figure out for the path forward. Right now we all want to be in that top group – there is a race for prestige in higher education that is unsustainable and that makes institutions, at the end of the day, look a lot alike. Everybody wants to look like the institution down the road. There's really no reward for trying to be different, because the US News rankings and other rankings don't reward being [different].
I believe that the financial pressures facing institutions will force them to make choices that will allow greater student choice – in how courses and credits are strung together, how degrees are earned. Ultimately we’ll arrive at a system that is more efficient and gets more students emerging at the end with degrees at a reasonable cost.
Here are two of the things I've learned in writing my book, which will be coming out in May. One is that we've done a fairly poor job of matching students and institutions in this country. I talked to a hundred college students in preparation for my book, and I was shocked at how little thought went into how they decided where to go to college. It's one of the reasons why so many students end up dropping out of college – they're just matched poorly. The second thing I discovered in talking to a lot of students, and this was mentioned in the opening remarks, is that many students don't know why they're in college. At the end of high school, we've created three pathways for students: go into the military, go to a job for which a high school degree is sufficient, or go to college.
Today's students think of college as a [convenient warehouse], but they don't know why they're there. One thing about all of these alternatives to traditional higher ed is that they have the potential to create different pathways to college. Not every student should necessarily go to college straight from high school; some will go at some point in their life, whether they're 20, 25, or 30. I think the alternatives have been building on, and working with, traditional higher education to create those alternative pathways. The idea in the US should not be that, right out of high school, the only choice you really have to get ahead is to go to college.
I’m not saying that [we're unique] – trust me, I’ve been in the publishing industry for nearly 20 years, and most of that time in a state of turmoil – but I think this is going to be a great ride, and I look forward to the discussions today, and I appreciate the time that Dean gave me to open up. Thank you very much.
The post Re:Booting CA Higher Education – Transcript of Jeff Selingo Keynote appeared first on e-Literate.
I have a new post up at WCET based on some observations from the recent 20 Million Minds conference.
In past years the primary role of state government was to take the lead on funding while working with statewide systems on enrollment policies to serve workforce and general educational needs. Last week in California we witnessed the state government, both from the governor’s office and the legislature, become the driving force of change for determining the role of educational technology and online education to transform the public systems. State officials are no longer content with encouraging and hoping that postsecondary institutions will develop a strategy for systemic change on their own.
Three interdependent events are emerging that clarify this trend.
Read the whole article here.
The post New Post at WCET On Activist Role of State Government in Higher Education appeared first on e-Literate.
Governor Brown appears to be focusing his attention on “bottleneck courses”:
As part of the additional $125.1 million in proposed state funds, $10 million has been directed in the Governor’s budget for online strategies to get more students through so-called “bottleneck” courses. These are courses across the system that cause many students to slow their time to degree until they can find a “seat” in that particular course. They are either lower-division general education requirements, pre-requisites for majors or high demand classes. The directed funds would be used for a multi-pronged approach incorporating technology-enhanced learning, student advising and course redesign to ensure student success. Together, all of these efforts are expected to provide thousands of students more access to classes and help them progress to degree.
They will also save the state money. In a state system, every student is subsidized. Reducing time to graduation also reduces budget burdens. So it’s good for everyone. This is a smart, targeted approach that could also form the basis for experimentation and innovation with ed tech. Note the reference to “course redesign.”