Sunday, November 3, 2013

NSF's Broader Impacts Criteria

I attended a panel of colleagues, including a former member of the National Science Board (NSB) and three recent NSF review panelists, who discussed the revised broader impacts directives from NSF. It was an excellent panel, with great ideas, important insights, and pointers to helpful campus resources. In the latter half of this post, I offer some qualifications to some of the more nuanced comments, stemming from my experience at NSF (http://www.cccblog.org/2011/08/24/first-person-life-as-a-nsf-program-director/), but on the whole I was really struck by how much I resonated with what was said. I'll start by elaborating on these broad points of agreement, first talking about broader impacts as societal implications of the research, then broader impacts through formal and informal educational mechanisms.


Broader Impacts as Societal Implications


In my mind, the most important change to NSF's guidelines (http://www.nsf.gov/pubs/policydocs/pappguide/nsf13001/gpg_3.jsp#IIIA2 ) is that broader impacts (BIs) are to be evaluated by the same factors as intellectual merit (IM):
1. What is the potential for the proposed activity to:
    a. Advance knowledge and understanding within its own field or across different fields (Intellectual Merit); and
    b. Benefit society or advance desired societal outcomes (Broader Impacts)?
2. To what extent do the proposed activities suggest and explore creative, original, or potentially transformative concepts?
3. Is the plan for carrying out the proposed activities well-reasoned, well-organized, and based on a sound rationale? Does the plan incorporate a mechanism to assess success?
4. How well qualified is the individual, team, or organization to conduct the proposed activities?
5. Are there adequate resources available to the PI (either at the home organization or through collaborations) to carry out the proposed activities?
There are additional requirements on proposal content to ensure that sound assessments can be made along all five factors for BI, and of course, some of these five factors are newly and explicitly applied to broader impacts.

The importance of "institutionalizing broader impacts" was emphasized early by the panel moderator. A university that has advancing broader impacts in its bones encourages everyone to leverage and grow institutional resources, and creates a collective intelligence that isn't myopic about the societal implications of science and engineering, even if individual scientists often are.

When I was at NSF, most proposals didn't elaborate much on broader impacts. I think most PIs take on faith that their research will have broader societal significance, and don't feel the need, or the ability, to elaborate beyond a phrase or a paragraph. Often I sympathize. For example, the PI working on a new computer programming language might feel that the broader impacts of that work are coextensive with all that is touched by computer programming! I am guessing that mathematicians and theoretical physicists are of the same mind -- that the broader impacts are so pervasive and sufficiently distant that it's almost impossible to reason about and express them. But particularly in the computing and engineering disciplines, someone should be thinking about the societal implications, because they won't all be positive.

Here are several more thoughts.

1) When a program director (PD) and/or a panel sees a proposal that elaborates intelligently on broader impacts, it really makes the proposal stand out from the rest. Occasionally, I've heard comments like "I have never weighted broader impacts so highly" from a panelist. A PD hears that, and it makes a difference in the PD's recommendations for funding. For example, research on novel variations of mathematical and computational optimization applied to ecological problems (e.g., design of wildlife reserves) or health problems (e.g., kidney exchange arrangements) would stand out, could be verified, and would be a vehicle for describing the science and its motivation to the public, including Congress -- a big plus, and one that I believe in.

2) One thing that I have never seen is an NSF proposal that considers the possibility of negative societal impact (together, we hope, with societal benefits too) -- for example, that increasing the energy efficiency of a class of devices will cause those devices to be used more, and therefore the collective energy footprint of those devices worldwide will increase (the toy calculation below sketches the arithmetic). If I ever did see such a proposal, coupled with some plan to guard against the negative impact, or just to test for it, I'd really be impressed, and I think it might impress (some) panelists too. As a PD, and more recently as a panelist on interdisciplinary proposals (e.g., Science, Engineering, and Education for Sustainability), I've seen good reasons to bring social and behavioral scientists into an otherwise technical proposal, because there are implications, often negative, in how humans interact with a technology.
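That energy example is the well-known "rebound effect" (the Jevons paradox). Here is the calculation I have in mind -- my own illustration, with every number invented purely to show the arithmetic, not drawn from any proposal or dataset:

```python
# Toy arithmetic for the "rebound effect": a per-device efficiency gain
# can be swamped by increased adoption and use. All numbers are invented.

baseline_devices = 1_000_000   # devices in use before the efficiency gain
baseline_watts = 10.0          # average power draw per device (W)
baseline_hours = 2.0           # average hours of use per day

efficient_watts = baseline_watts * 0.7   # research cuts per-device draw by 30%

# Hypothetically, the cheaper-to-run devices see more adoption and heavier use.
new_devices = baseline_devices * 1.6
new_hours = baseline_hours * 1.25

baseline_kwh = baseline_devices * baseline_watts * baseline_hours / 1000
new_kwh = new_devices * efficient_watts * new_hours / 1000

print(f"Before: {baseline_kwh:,.0f} kWh/day")   # 20,000
print(f"After:  {new_kwh:,.0f} kWh/day")        # 28,000 -- a 40% increase
```

Whether the adoption and usage multipliers are this large for any real device class is an empirical question -- which is exactly why a proposal that planned to test for the effect would impress me.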

3) In the case of multidisciplinary proposals, what is BI to one field might be IM to another, and vice versa. For example, a computer scientist working with an ecologist might propose to create a new sensor (IM for computing) that would enable better environmental data collection and analysis (BI for computing, and IM for ecology), and therefore better management of resources (e.g., water) for communities (BI for both computing and ecological science -- 1st-order BI for ecology, 2nd-order BI for computing). I find that this observation about the discipline-specific nature of IM and BI is generally new to PIs, and helpful as they start to think about the IM and BI of a multidisciplinary proposal.
     Interdisciplinary teams can generally mitigate myopia (e.g., in the example above, consider how the "2nd-order" BI for computing can be traced through work with ecology; these higher-order BI effects can be negative as well as positive). Good for NSF for encouraging such proposals through funding programs! I think universities can do a better job of mitigating scientific and engineering myopia through interdisciplinary teaming, and this is NOT usually part of what many universities mean by "institutionalizing broader impacts".

4) Different divisions and programs of NSF view BI differently. The foundational areas (e.g., computer programming languages, computer hardware) are (almost by definition) farther from the broader societal impacts of the research -- after all, they are at the "foundation"! The PDs in the CISE Division of Computing and Communications Foundations will tell the PDs in the CISE Division of Information and Intelligent Systems "You ARE our broader impacts!!!" In the foundational divisions generally, dissemination mechanisms (e.g., workshops, published papers, etc.) and education initiatives may dominate the discussion of broader impacts. This came out in the BI panel. I think that these differences will continue (at least, I hope they do), though I also hope we find mechanisms that allow scientists and the public alike to appreciate the implications (1st-order, 2nd-order, and higher-order still) of foundational research for societal impact. This generally happens through anecdotal stories (e.g., the creation of the Internet, fertilizer that enables feeding the world, …), which is good, but many at NSF would like better longitudinal tools for visualizing the impact of NSF's investments, through citation tracking and technology transfer, for example.

In my experience, measuring societal impact generally is not the focus of attempts to institutionalize broader impacts through "evaluation shops" and the like, except at the Center level -- but it can be.

Education, Outreach, and Diversity


Formal and informal education are good mechanisms for broadening impact; I would call much of what we term informal education "outreach". With respect to the education components of BI, whether elaborated or not, most proposals I saw at NSF didn't aspire to broader impacts that went beyond the funding period. These proposals essentially proposed to do something worthy, but local, both regionally and temporally. Again, when you see ambition to institutionalize educational innovations so that they persist beyond the funding period and beyond the PI's immediate network, it really stands out. Here is where much of the emphasis on "institutionalizing broader impacts" (Google it!) can be found (Vanderbilt, OSU, Missouri, Stanford, etc.). At Vanderbilt, the Center for Science Outreach (VCSO: http://www.scienceoutreach.org/) is giving PIs mechanisms for broadening the impact of their science through formal and informal education. I expect that the Vanderbilt Institute for Digital Learning (http://www.vanderbilt.edu/vidl/) will work with VCSO, and with other groups for non-STEM fields, to further institutionalize broader impacts, ensuring that positive BIs persist and grow.

The former NSB member highlighted the importance of evaluating BIs, just as PIs are expected to evaluate IM (see factor 3 above). This is fantastic -- I can't remember seeing a scientific evaluation plan for BI activities in proposals, except for large Centers, where NSF required that an "independent" evaluation team be appointed for the BI aspects. While NSF has been pushing on BIs for a long time, making BIs "first class" alongside IM is overdue.

I came back from NSF believing in the importance of institutionalizing broader impacts. There should be dedicated funds for BI (see http://www.vanderbilt.edu/provost/cms/files/Broader-Impacts-2-0.pdf); particularly for medium and large proposals, there should be a co-PI who is explicitly named as the BI lead (my opinion); and some funds should be set aside to support communicating science and technology to the public too, because I haven't seen this latter activity explicitly called out. Apropos this last point, I spent late nights rewriting a fair number of award abstracts so that there was some chance that the research, and the motivations for it, would be understood at some meaningful level by a larger public, including congressional staffers. While there were some notable exceptions, most PIs seemed to think that they could let the proposal project summary serve as the award abstract -- sheesh! That summary might be a good starting point, but iteration is necessary to make it publicly accessible.

When I returned from NSF, I learned about Vanderbilt's Communication of Science & Technology major (http://www.vanderbilt.edu/cst/major.html); Vanderbilt must be (close to) unique in the nation in having such a major (good for Vanderbilt!), and it can be the basis for institutionalizing these kinds of broader impacts. Also, there can and should be a better connection between the communications teams at universities and schools and NSF, other agencies, and foundations. When I was at NSF, I can't remember ever getting award highlights from the professional science news writers who I know are writing for universities and schools -- why not?! Rather, again, I had to iterate with PIs to get research award highlights that were informative and accessible to the public. In most cases, getting such highlights from PIs was like pulling teeth -- ugh! Some probably don't value highlights much, while others would like to contribute but are busy too. These highlights will be read by congressional staffers, and they need to be good, rather than an annoyance.

Related to the education components of BI are diversity concerns, ranging from the diversity of the research team, particularly on Center-level proposals, to diversity in future generations of scientists and engineers. Again, on Center-level proposals there will be special accommodations to ensure that diversity, and change in diversity over time, are evaluated. But as with (other) education components, there was often little ambition and creativity in attention to diversity. It's not that broadening participation isn't an intellectually interesting area of study (e.g., see http://www.nsf.gov/pubs/2012/nsf12037/nsf12037.jsp); it's that few PIs are thinking about it in those terms, and so you read silly things, almost disrespectful in my mind, like listing the race and gender of selected members of the research team as the sole attention to broadening participation. In some cases you get the impression that the PI has put about 10 minutes of creative thought into broadening participation, and broader impacts more generally. Again, what are the ambitions for initiatives that move beyond the PI's institution and that will persist and grow after the funding period ends? Institutionalizing broadening-participation concerns is germane here too.

Behind the Scenes


There was talk of "why don't PDs do this or that" and "NSF should do this". Some things said on the BI panel weren't wrong per se, but some important factors didn't seem to be appreciated.

One of the most important things I learned at NSF was that there is substantial noise, from different sources, in the process of vetting proposals. I don't mean that the noise is debilitating or that it compromises the validity of peer review as implemented at NSF, but it's easy, I think, to "overfit" your experience on a panel and believe you can prescribe simple fixes. Here are some observations.

1) BIs are historically weighted less than IM. In my experience, panels will judge a proposal worthy of funding or not based on IM, and break ties based on BI. The new guidelines won't guarantee equal weighting of IM and BI (see 4c of http://www.vanderbilt.edu/provost/cms/files/Broader-Impacts-2-0.pdf), and I don't think that they should, but I think that the new guidelines will ensure that BI is more than a tiebreaker; the small sketch below contrasts the two regimes. In some cases, BI might be more heavily weighted than IM, and in a diversified NSF grant portfolio, I think that is perfectly fine. But again, recognize that IM and BI are discipline specific. As an aside, good grantsmanship would suggest that if you are getting declined for an education-heavy proposal in CISE (or MPS or ENG ...), then recast it and submit it to EHR!
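Here is that sketch -- a toy model of my own, with an invented 1-5 scale, invented scores, and an invented weight; no NSF panel scores proposals numerically like this:

```python
# Toy contrast of "BI as tiebreaker" with "BI as a weighted factor".
# Proposals are (name, IM, BI) on an invented 1-5 scale.
proposals = [
    ("A", 4.8, 2.0),
    ("B", 4.8, 4.5),
    ("C", 4.2, 5.0),
    ("D", 3.9, 3.0),
]

# Historical practice (roughly): rank on IM, use BI only to break ties.
tiebreak = sorted(proposals, key=lambda p: (p[1], p[2]), reverse=True)

# A weighted alternative: BI counts throughout, not just at the margins.
w_im, w_bi = 0.7, 0.3
weighted = sorted(proposals, key=lambda p: w_im * p[1] + w_bi * p[2],
                  reverse=True)

print([p[0] for p in tiebreak])   # ['B', 'A', 'C', 'D']
print([p[0] for p in weighted])   # ['B', 'C', 'A', 'D']
# Under the tiebreak regime, C's strong BI never helps because its IM
# trails A's; even a 30% weight on BI lifts C past A.
```

The point is not the particular weights, but that under tie-breaking a proposal's BI is invisible unless its IM exactly matches a competitor's.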

2) Review panels are usually great at telling a PD which proposals are worthy of funding and which are not. This is already a big win for a PD who has to make decisions on what to recommend. In my experience, problems arise when a PD PUSHES a review panel to do what it is not equipped to do. I do not think, for example, that a review panel is in a position to make hard recommendations (e.g., highly competitive versus competitive) based on projected funding levels. That's because the panel does NOT have all the facts in front of it to make such fine-grained recommendations.
         Funding levels are often much lower than the percentage of proposals worthy of funding. This can lead a panel to "overfit" the proposals, with great angst over those last few proposals being placed in highly competitive versus competitive, and competitive versus not recommended. It's not that overfitting will lead to "wrong" decisions or even "wronger" decisions (because most experts will focus on one valid set of characteristics over another valid set), but it can lead to great angst, and it can lead to odd factors deciding the final hard calls (like who needs to get to the airport, an advocate or a detractor of the proposal in question?). The toy simulation below illustrates why those last close calls are so unstable.
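Here is that simulation -- entirely my own illustration, with invented quality values and an invented noise level; real panels do not produce numeric scores like these:

```python
# Toy simulation: with noisy review scores, proposals near the funding
# cutoff flip in and out of the funded set from panel to panel, while
# the clear winners and losers do not.
import random

random.seed(1)
true_quality = [3.0 + 0.1 * i for i in range(20)]   # 3.0, 3.1, ..., 4.9
cutoff = 10       # funds for only 10 of the 20 proposals
noise = 0.3       # std. dev. of per-panel scoring noise
trials = 1000

funded = [0] * 20
for _ in range(trials):
    scores = sorted(((q + random.gauss(0, noise), i)
                     for i, q in enumerate(true_quality)), reverse=True)
    for _, i in scores[:cutoff]:
        funded[i] += 1

for i, q in enumerate(true_quality):
    print(f"true quality {q:.1f}: funded in {100 * funded[i] / trials:3.0f}% of panels")
# The best and worst proposals are funded in ~100% / ~0% of simulated
# panels; those near the cutoff land on either side almost at random --
# the "last few" placements reflect noise more than merit.
```

The top and bottom of the ranking are robust; the last few calls are where the noise lives, which is why pushing a panel for ever finer distinctions there buys angst rather than signal.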

3) One BI panelist said that on an interdisciplinary NSF panel he/she had served on, 3/4 of the proposals were quickly decided because of IM weaknesses, and the remaining IM-strong proposals were placed in final categories based on BI factors. That sounds consistent with my experience and seems perfectly fine to me, but it may seem less than ideal to some NSF panelists (overfitting their own panel experience, as above). Some additional points:
  • (i) The new NSF guidelines may make proposal assessment more holistic (IM AND BI) throughout the paneling process, rather than IM assessment followed by BI assessment. Such a change may lengthen panel time.
  • (ii) The weighting of BI is INCREASED in interdisciplinary settings. What would we otherwise expect an interdisciplinary panel to do??? Paneling interdisciplinary PRE-proposals relies even more heavily on BI factors. It's interesting to me that scientists agree with Congress on the importance of BI, when it's not a proposal in their own area.
  • (iii) I once suggested that a PI who was not getting a proposal funded through the core program recast it and submit it to an interdisciplinary, cross-directorate program, specifically to take advantage of the BI bias on interdisciplinary panels. Some might view this as exploiting noise (yes!) and some might say it's one mechanism for getting out-of-the-box research funded (yes!). The PI's proposal was recommended and funded under the interdisciplinary program; it had also been well regarded by previous core-area panels, rated Competitive or Not Recommended for Funding (yes, a proposal can still be good in this latter case).

4) Not Recommended for Funding is not the same as not worthy of funding or not ready for funding. Again, we invite a panel to increasingly overfit the more we ask it to make finer-grained distinctions. Making finer-grained distinctions is more likely to tweak personal, professional, and scientific biases and constraints. I mean, why should charisma be a factor in making scientific recommendations? More importantly, why is NSF shooting itself in the foot by reporting to the public and to Congress that some large percentage of proposals are NOT "recommended"? Many will read this as NOT worthy, but that is NOT the case: at least some proposals that are not recommended for funding are, in the opinion of the panel, worthy of funding! We thereby misrepresent the under-funding of science -- "but the expert panel said this stuff wasn't worth funding, so why increase funding!"
 
5) It's often the case that there is no consensus on the final, close-call recommendations by a panel. This difference of opinion can and should be represented in the Panel Summary. If one or more panelists believe that a proposal should be rated more highly (and really, in any case), make sure that opinion and the reasons for it are expressed in the Panel Summary, and that the PD has heard the argument during discussion (because of what I will say in point 6 about PD discretion). In fact, a recommendation (HC, C, NRF) by the panel is NOT required (what's the PD going to do? -- "make you" do something??!! -- no chance, only in your head). In one or two situations I had a panel split down the middle, and no one would budge on an HC vs C (for example), so they described the deadlock and left the recommendation box unchecked. I had heard what I needed to hear to make a recommendation.

6) In my experience, NSF PDs are relatively quiet during review panels -- and I think that's a good thing. An NSF PD is not a DARPA PD, thank goodness, nor vice versa, thank goodness. NSF PDs have visions for their fields, but their actions are highly modulated by the research community, at least within their core discipline areas (PDs often branch out more when creating and implementing interdisciplinary initiatives that will influence their fields).
     A PD needs information for making recommendations, and while the panel recommendations are the single most important factor in a PD's recommendation, they are far from the only factor. Other factors include portfolio balance (where balance does not imply equal cardinality), institutional balance (ditto), PI balances (ditto), balances within the larger programmatic unit (e.g., robotics versus natural language processing versus …), …, AND WHAT THE PD HEARD DURING THE PANEL DISCUSSION. A good PD is a good listener. A good PD will likely speak up from time to time, but not too much. When I have seen what I regard as a PD stepping over the line and being too prescriptive, it's been a rotator.
     In some sense it doesn't matter too much if a review panel "overfits" in its recommendations, because while a PD is very influenced by a panel, the PD is NOT tied to it. In fact, arguably the PD is there to compensate for panel overfitting, scientific conservatism, and bias. It's no small thing to decline a Highly Competitive proposal because you think a Competitive proposal should be funded instead (and there are not the funds to do both), and all this needs to be justified IN WRITING, so there is nothing flippant about any of this. On rare occasions a Not-Recommended-for-Funding proposal may be funded (because that's not the same as Not Worthy), but that takes considerable justification.
    Thus, you might see a PD remain silent during the panel itself, because the panel is there for the PD to collect information, not to make final decisions. Should a reader advocate, in contrast, that a PD take a "leadership role" on the panel, for example on the importance of BI, recognize that that is a slippery slope. When I opened my mouth, it was most often to ask or answer a question, but yes, I would have to ensure that the panel addressed BI to my satisfaction, that they wrote a respectful and informative panel summary, etc.
    That said, I think it's a wonderful thing to set aside a session before the panel begins to talk to a panel about issues of intrinsic bias, broader impacts, etc., but once the panel starts, don't start (trying to) direct them TOO MUCH, else you won't know where they will go on their own, informed by the factors that they are in a position to assess, and thus a PD will confound her or his decision-making process with the panel's.
    Would I advocate that we not push panels to make the fine-grained distinctions among those last close calls on the borders of categories (e.g., HC, C, NRF)? Sometimes, perhaps, but suffice it to say that having a panel make the fine-grained distinctions gets them to talk through the issues thoroughly, and it's one mechanism for getting the issues on the table and heard by the PD, even if a PD might come down differently on the close calls than the panel does.
     But alas, there is another reason that PDs and their superordinates may push panels to make those final hard calls! Those final placements into HC, C, NRF are heavy lifting, and if the panel doesn't do it, the PD must. It's not that I think the PD will do a better or worse job in those final placements (though the PD might use different tiebreakers than a panel) -- it's that the PD often just doesn't have time. Exercising discretion, when you are (thankfully) obligated to justify it, takes a lot of time, which a PD often doesn't have.

Time, Time, and Time


Lots more I could say here, but let the following general points anticipate suggestions that "NSF" (as if NSF were monolithic) do this or that. Most NSF staff are working very long hours, and this includes a lot of in-the-trenches work. In the CISE (Computing) Directorate, I would sometimes think that if work weeks of more than 50 hours were made illegal, with stiff penalties for violators, there would be a year of extraordinary angst and pain for NSF and academia, followed by consistency and organizational and programmatic sanity. It's only because of extraordinarily hard work by many NSF staff that the whole system doesn't fall apart, but institutional performance is degrading, albeit gracefully. Any increases in funding to NSF generally go to new scientific funding programs, each of which increases overhead, and not towards increases in staffing. After getting back to Vanderbilt, I recall the excitement caused by the Robotics Initiative!!! And it was exciting. But you can bet that the overhead associated with it came out of the hides of NSF staff.

I've heard it said that NSF talks out of both sides of its mouth on broader impacts or on other issues, or that it drops the ball on this or that. Consistency requires training, and that requires time. Going against a panel recommendation (supporting a Competitive proposal because of BI over a Highly Competitive proposal) requires justification in writing, which requires time. Reading and pushing PIs for BI updates as well as IM updates requires time. Getting the "best" panelists to peer review proposals requires time, because in CISE at least, PDs will often see an acceptance rate of 20-30% on panel invitations; I had high rates -- about 60-70% as I recall, because I allowed panelists to "phone in" (http://science-and-government.blogspot.com/2011/08/virtual-panelists-and-thoughts-on.html) -- but still, designing, recruiting, and running a balanced panel takes time. And of course big thinking takes time, be it on designing funding programs along societal dimensions such as sustainability, health, and education, or tech/science dimensions such as robotics, computational game theory, etc.

A major constraint on NSF, or I should say on the staff within NSF, in responding to suggestions for "this or that" is time, time, and time. In addition to writing NSF, write Congress about the funding of science, and the funding of the staff who create, implement, and run the programs.

Saturday, November 2, 2013

Rotating Program Directors at NSF

I recently commented on a blog post by Jeffrey Mervis on the AAAS Science blog at http://news.sciencemag.org/policy/2013/10/special-report-can-nsf-put-right-spin-rotators-part-1 , which acknowledged the pros of using faculty members from academic institutions as "temporary" or "rotating" program directors at the National Science Foundation (NSF), working side by side with permanent Federal staff. Mr. Mervis' article also points out that monetary savings might be achieved relative to the present implementation of NSF's rotator program.

I served at NSF as a rotating program director in the Computer & Information Science & Engineering (CISE) Directorate from 2007-2010 and have thoughts on the NSF rotator program. I repeat my comments on Mr. Mervis's article here; in them I emphasize which savings might be most productive and doable. Some of the other recommendations for monetary savings in the Inspector General (IG) report cited in Mr. Mervis' original post seem less achievable, or even less desirable -- maybe I will elaborate another day. I also argue that NSF should broaden its perspective on the possible benefits of rotating program directors.

-----

Your post (part I) and the IG's report paint an accurate, though brief, picture of the IPA program: IPAs (and other staff) work hard and very competently, benefiting science and engineering research and education in the United States, but cost savings are possible. Of the suggested savings, reducing IPA travel back and forth between home institution and NSF would probably be (the most) productive. Frequent (e.g., weekly) travel by an IPA is costly, and it can also disrupt operations in NSF's team-oriented environment. For IPAs who commit to a life predominantly in the DC area, I hope that NSF continues to pay for their relocation. However, for those who would prefer life predominantly at their home institution, let them telework, probably after an onsite orientation period designed to protect NSF esprit de corps. In either case, limit travel back and forth to some sensible number of trips, because 50 IRD trips a year is ridiculous, even if 50 days of IRD is not. This might also put NSF in a better position to negotiate for partial IPA compensation by the institutions of those rotators who stay at home (the idea that home institutions should partially compensate IPAs who are working extraordinary hours for the government, particularly anyone onsite at NSF, seems misplaced). Importantly, these arrangements are easier said than done, at least while preserving the benefits of the IPA program.

While I limited trips to my home institution of Vanderbilt University, I nonetheless ran two "virtual" review panels from my Vanderbilt office, supporting the IG's contention (and that of many in NSF's operational divisions too!) that much can be done through remote communication technology. And now we are getting into a largely underutilized advantage of the IPA program -- IPAs can benefit NSF operations as well as the scientific mission. IPAs are smart, usually very dedicated people who are watching and innovating the operations of NSF. For example, fully 3/4 of the review panelists that I recruited were virtual panelists -- they participated by phone or video conferencing, and saved NSF substantial travel costs. My supervisors in the organization, including two IPAs, supported this activity. Other IPAs innovated in similar ways, as did some members of the permanent staff. If NSF made a commitment to supporting IPAs who want to telework from their home institutions, with safeguards in place to preserve high-quality communication, responsiveness, and NSF esprit de corps, it would go a long way toward building a culture in which much larger monetary savings could be realized through the use of virtual panelists (http://www.sciencemag.org/content/331/6013/27.full), as well as reaping other substantial advantages of virtual panelists (http://science-and-government.blogspot.com/2011/08/virtual-panelists-and-thoughts-on.html).

Apropos the possibility of operational benefits from IPAs: exit interviews of IPAs seemed spotty, and certainly not universal, when I was there. It strikes me as a terrific lost opportunity if NSF is bringing in talented faculty members, almost all of whom have the luxury of speaking their minds because of the job security that stems from tenure, and not exit interviewing them and then acting on those interviews!

The IG report also suggests the desirability of a person or office dedicated to evaluating the IPA program on a continuing basis – that is a terrific idea. I have no doubt that ongoing evaluation would affirm the scientific advantages of the IPA program and improve IPA management. In particular, John Conway’s article alludes to the “cultural” differences that often exist between academia and the team-oriented environment of NSF. An IPA-oversight officer who respected and appreciated the IPA mission would presumably help define best practices of IPA orientation, training, and management to effect the transition to the NSF environment, as well as evaluate the program.

Finally, part 2 (http://news.sciencemag.org/people-events/2013/10/special-report-can-nsf-put-right-spin-rotators-part-2 ) of your article highlights a case where an IPA may have been powerless and dismissed summarily. I do not know this case, but five comments seem relevant and responsible: (1) I was proud of NSF’s policies and practices regarding conflicts of interest (COI), and I wish they were standards practiced throughout our Federal government; (2) my experience was that the professional ethics officials at NSF were honorable, highly competent, and responsive to requests for clarification and other help on COI issues; (3) the COI standards are high (thus my pride), but I would regard a case like that outlined as forgivable and correctable in a gentler and more constructive fashion than that described -- I can imagine circumstances in which I might have missed real or perceived COIs too; (4) if there were an officer responsible for assessing the IPA program at NSF, then presumably they would have looked carefully at the actions of all IPAs involved, including supervisors, and made corrective recommendations on IPA training and management at all levels; and (5) the individuals within NSF best placed to speak out on any injustice might well be IPAs, again because of the job security that stems from tenure at their home institutions. That’s not to say that rotators should be watchdogs, but more thought should go into how to use IPAs effectively to inform operations and management, as well as science.