Sunday, November 3, 2013

NSF's Broader Impacts Criteria

I attended a panel of colleagues, including a former member of the National Science Board (NSB) and three recent NSF review panelists, who discussed the revised broader impacts directives from NSF. It was an excellent panel, with great ideas, important insights, and pointers to helpful campus resources. In the latter half of this post, I offer some qualifications to some of the more nuanced comments, stemming from my experience at NSF (http://www.cccblog.org/2011/08/24/first-person-life-as-a-nsf-program-director/), but on the whole I was really struck by how much I resonated with what was said. I'll start by elaborating on these broad points of agreement, first talking about broader impacts as societal implications of the research, then broader impacts through formal and informal educational mechanisms.

 

Broader Impacts as Societal Implications


In my mind, the most important change to NSF's guidelines (http://www.nsf.gov/pubs/policydocs/pappguide/nsf13001/gpg_3.jsp#IIIA2 ) is that broader impacts (BIs) are to be evaluated by the same factors as intellectual merit (IM):
1. What is the potential for the proposed activity to:
    a. Advance knowledge and understanding within its own field or across different fields (Intellectual Merit); and
    b. Benefit society or advance desired societal outcomes (Broader Impacts)?
2. To what extent do the proposed activities suggest and explore creative, original, or potentially transformative concepts?
3. Is the plan for carrying out the proposed activities well-reasoned, well-organized, and based on a sound rationale? Does the plan incorporate a mechanism to assess success?
4. How well qualified is the individual, team, or organization to conduct the proposed activities?
5. Are there adequate resources available to the PI (either at the home organization or through collaborations) to carry out the proposed activities?
There are additional requirements on proposal content to ensure that sound assessments of BI along all 5 factors can be made, and of course, some of these 5 factors are newly applied explicitly to broader impacts.

The importance of "institutionalizing broader impacts" was raised early on by the panel moderator. A university that has advancing broader impacts in its bones encourages everyone to leverage and grow institutional resources, and creates a collective intelligence that isn't myopic about the societal implications of science and engineering, even if individual scientists often are.

When I was at NSF, most proposals didn't elaborate much on broader impacts. I think most PIs take on faith that their research will have broader societal significance, and don't feel the need or the ability to elaborate beyond a phrase to a paragraph. Often I sympathize. For example, the PI working on a new computer programming language might feel that the broader impacts of that work are coextensive with all that is touched by computer programming! I am guessing that mathematicians and theoretical physicists are of the same mind -- that the broader impacts are so pervasive and sufficiently distant that they are almost impossible to reason about and express. But particularly in the computing and engineering disciplines, someone should be thinking about the societal implications, because they won't all be positive.

Here are several more thoughts.

1) When a program director (PD) and/or a panel sees a proposal that elaborates intelligently on broader impacts, it really makes that proposal stand out from the rest. Occasionally, I've heard comments like "I have never weighted broader impacts so highly" from a panelist. A PD hears that, and it makes a difference in the PD's recommendations for funding. For example, research on novel variations of mathematical and computational optimization applied to ecological problems (e.g., design of wildlife reserves) or health problems (e.g., kidney exchange arrangements) would stand out, could be verified, and would be a vehicle for describing the science and its motivation to the public, including Congress -- a big plus, and one that I believe in.

2) One thing that I have never seen is an NSF proposal that considers the possibility of negative societal impact (together, we hope, with societal benefits too) -- for example, that increasing energy efficiency of a class of devices will cause those devices to be used more, and therefore the collective energy footprint of those devices worldwide will increase. If I ever did see such a proposal, coupled with some plan to guard against it or just test for it, I'd really be impressed, and I think it might impress (some) panelists too. As a PD, and more recently as a panelist on interdisciplinary proposals (e.g., Science, Engineering, and Education for Sustainability), I've seen good reasons to bring in social and behavioral scientists to an otherwise technical proposal because there are implications, often negative, to how humans interact with a technology.
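To make that energy-efficiency example concrete, here is a toy calculation (the numbers are entirely hypothetical, chosen only to illustrate the effect) of how a per-device efficiency gain can still raise the collective energy footprint if usage grows enough in response:

```python
# Hypothetical rebound-effect arithmetic: a device becomes 30% more
# efficient per use, but the lower cost of use drives a 60% increase
# in total worldwide usage.
old_energy_per_use = 1.0          # arbitrary energy units
new_energy_per_use = 0.7          # 30% efficiency gain
old_uses = 1_000_000
new_uses = 1_600_000              # 60% more usage

old_footprint = old_energy_per_use * old_uses   # 1,000,000 units
new_footprint = new_energy_per_use * new_uses   # 1,120,000 units

# Despite the per-use gain, the collective footprint rose 12%.
assert new_footprint > old_footprint
```

A proposal that anticipated this kind of effect, and planned to test for it, is exactly the sort of negative-impact analysis described above.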

3) In the case of multidisciplinary proposals, what is BI to one field might be IM to another, and vice versa. For example, a computer scientist working with an ecologist might propose to create a new sensor (IM to computing) that would enable better environmental data collection and analysis (BI for computing, and IM for ecology), and therefore better management of resources (e.g., water) for communities (BI for both computing and ecological science -- 1st-order BI for ecology, 2nd-order BI for computing). I find that this observation about the discipline-specific nature of IM and BI is generally new to PIs, and helpful in their starting to think about the IM and BI of a multidisciplinary proposal.
     Interdisciplinary teams generally can mitigate myopia (e.g., in the example above, consider how the "2nd-order" BI for computing can be traced through work with ecology, and these higher order BI effects can be negative as well as positive). Good for NSF for encouraging such proposals through funding programs! I think universities can do a better job of mitigating scientific and engineering myopia through interdisciplinary teaming, and this is NOT usually part of what many universities mean by "institutionalizing broader impacts".

4) Different divisions and programs at NSF view BI differently. The foundational areas (e.g., computer programming languages, computer hardware) are (almost by definition) farther from the broader societal impacts of the research -- after all, they are at the "foundation"! The PDs in the CISE Division of Computing and Communications Foundations will tell the PDs in the CISE Division of Information and Intelligent Systems, "You ARE our broader impacts!!!" In the foundational divisions generally, dissemination mechanisms (e.g., workshops, published papers) and education initiatives may dominate the discussion of broader impacts. This came out in the BI panel. I think these differences will continue, and I hope they do, though I also hope we find mechanisms that allow scientists and the public alike to appreciate the implications (1st-order, 2nd-order, and higher-order still) of foundational research for societal impact. This generally happens through anecdotal stories (e.g., the creation of the Internet, fertilizer that enables feeding the world, …), which is good, but many at NSF would like better longitudinal tools for visualizing the impact of NSF's investments, through citation tracking and technology transfer, for example.

In my experience, measuring societal impact generally is not the focus of attempts to institutionalize broader impacts through "evaluation shops" and the like, except at the Center level -- but it can be.

Education, Outreach, and Diversity

 

Good mechanisms for broadening impact are formal and informal education, where I would call much of what we term informal education "outreach". With respect to the education components of BI, whether elaborated or not, most proposals I saw at NSF didn't aspire to broader impacts that went beyond the funding period. These proposals essentially proposed to do something worthy, but local, both regionally and temporally. Again, when you see ambition to institutionalize educational innovations so that they persist beyond the funding period and beyond the PI's immediate network, it really stands out. Here is where much of the emphasis on "institutionalizing broader impacts" (Google it!) can be found (Vanderbilt, OSU, Missouri, Stanford, etc). At Vanderbilt, the Center for Science Outreach (VCSO: http://www.scienceoutreach.org/) is giving PIs mechanisms for broadening the impact of their science through formal and informal education. I expect that the Vanderbilt Institute for Digital Learning (http://www.vanderbilt.edu/vidl/) will work with VCSO and other groups, for non-STEM fields too, to further institutionalize broader impacts, ensuring that positive BIs persist and grow.

The former NSB member highlighted the importance of evaluating BIs, just as PIs are expected to evaluate IM (see factor 3 above). This is fantastic -- I can't remember seeing a scientific evaluation plan for BI activities in proposals, except for large Centers, where NSF required that an "independent" evaluation team for the BI aspects be appointed. While NSF has been pushing on BIs for a long time, making BIs "first class" alongside IM is overdue.

I came back from NSF believing in the importance of institutionalizing broader impacts: there should be dedicated funds for BI (see http://www.vanderbilt.edu/provost/cms/files/Broader-Impacts-2-0.pdf); particularly for medium and large proposals, there should be a co-PI who is explicitly named as the BI lead (my opinion); and some funds should be set aside to support communicating science and technology to the public, because I haven't seen this latter activity explicitly called out. Apropos this last point, I spent late nights rewriting a fair number of award abstracts so that there was some chance the research and its motivations would be understood at some meaningful level by a larger public, including congressional staffers. While there were some notable exceptions, most PIs seemed to think that they could let the proposal's project summary serve as the award abstract -- sheesh! That summary might be a good starting point, but iteration is necessary to make it publicly accessible.

When I returned from NSF I learned about Vanderbilt's Communication of Science & Technology major (http://www.vanderbilt.edu/cst/major.html); Vanderbilt must be (close to) unique in the nation in having such a major (good for Vanderbilt!), and it can be the basis for institutionalizing these kinds of broader impacts. Also, there can and should be a better connection between the communications teams at universities and schools and NSF, other agencies, and foundations. When I was at NSF, I can't remember ever getting award highlights from the professional science news writers who I know are writing for universities and schools -- why not?! Rather, again I had to iterate with PIs to get research award highlights that were informative and accessible to the public. In most cases, getting such highlights from PIs was like pulling teeth -- ugh! Some probably don't value highlights much, while others would like to contribute but are busy too. These highlights will be read by congressional staffers, and they need to be good, rather than some annoyance.

Related to the education components of BI are diversity concerns, ranging from the diversity of the research team, particularly on Center-level proposals, to diversity in future generations of scientists and engineers. Again, on Center-level proposals there will be special accommodations to ensure that diversity, and change in diversity over time, is evaluated. But as with (other) education components, there was often little ambition and creativity in attention to diversity. It's not that broadening participation isn't an intellectually interesting area of study (e.g., see http://www.nsf.gov/pubs/2012/nsf12037/nsf12037.jsp); it's that few PIs are thinking about it in those terms, and so you read silly things, almost disrespectful in my mind, like listing the race and gender of selected members of the research team as the sole attention to broadening participation. In some cases you get the impression that the PI has put about 10 minutes of creative thought into broadening participation, and broader impacts more generally. Again, what are the ambitions for initiatives that move beyond the PI's institution and that will persist and grow after the funding period ends? Institutionalizing broadening-participation concerns is germane here too.

Behind the Scenes


There was talk of "why don't PDs do this or that" and "NSF should do this". Some of what was said on the BI panel isn't wrong per se, but some important factors don't seem to be appreciated.

One of the most important things I learned at NSF was that there is substantial noise, from different sources, in the process of vetting proposals. I don't mean that the noise is debilitating or that it compromises the validity of peer review as implemented at NSF, but it's easy, I think, to "overfit" your experience on a panel and think you can prescribe simple fixes. Here are some observations.

1) BIs are historically weighted less than IM. In my experience, panels will judge a proposal worthy of funding or not based on IM, and break ties based on BI. The new guidelines won't guarantee equal weighting of IM and BI (see 4c of http://www.vanderbilt.edu/provost/cms/files/Broader-Impacts-2-0.pdf), and I don't think that they should, but I think that the new guidelines will ensure that BI is more than a tiebreaker. In some cases, BI might be weighted more heavily than IM, and in a diversified NSF grant portfolio, I think that is perfectly fine. But again, recognize that IM and BI are discipline specific. As an aside, good grantsmanship would suggest that if you are getting declined for an education-heavy proposal in CISE (or MPS or ENG ...), then recast it and submit it to EHR!

2) Review Panels are usually great at telling a PD which proposals are worthy of funding and which are not worthy of funding. This is already a big win for a PD who has to make decisions on what to recommend. In my experience problems arise when a PD PUSHES a review panel to do what it is not equipped to do. I do not think, for example, that a review panel is in a position to make hard recommendations (e.g., highly competitive versus competitive) based on projected funding levels. That's because the panel does NOT have all the facts in front of it to make such fine-grained recommendations.
         Funding rates are often much lower than the percentage of proposals worthy of funding. This can lead a panel to "overfit" the proposals, with great angst over those last few proposals being placed in highly competitive versus competitive, and competitive versus not recommended. It's not that overfitting will lead to "wrong" decisions or even "wronger" decisions (because most experts will focus on one valid set of characteristics over another valid set), but it can lead to great angst, and it can lead to odd factors deciding the final hard calls (like who needs to get to the airport -- an advocate or a detractor of the proposal in question?).

3) One BI panelist said that on an interdisciplinary NSF panel that he/she had served on, 3/4 of the proposals were quickly decided because of IM weaknesses, and the remaining IM-strong proposals were placed in final categories based on BI factors. That sounds consistent with my experience and seems perfectly fine to me, but it may seem less than ideal (aka overfitting) to some NSF panelists. Some additional points:
  • (i) The new NSF guidelines may make proposal assessment more holistic (IM AND BI) throughout the paneling process, rather than IM assessment followed by BI assessment. Such a change may lengthen panel time.
  • (ii) The weighting of BI is INCREASED in interdisciplinary settings. What would we otherwise expect an interdisciplinary panel to do??? Paneling interdisciplinary PRE-proposals relies even more heavily on BI factors. It's interesting to me that scientists agree with Congress on the importance of BI, when it's not a proposal in their area.
  • (iii) I once suggested to a PI who was not getting a proposal funded through the core program to recast it and submit it to an interdisciplinary, cross-directorate program, specifically to take advantage of the BI bias on interdisciplinary panels. Some might view this as exploiting noise (yes!) and some might say it's one mechanism for getting out-of-the-box research funded (yes!). The PI's proposal was recommended and funded under the interdisciplinary program; it had also been well regarded by previous core-area panels, rated Competitive or Not Recommended for Funding (yes, it can still be a good proposal in this latter case).

4) Not Recommended for Funding is not the same as not worthy of funding or not ready for funding. Again, we invite a panel to increasingly overfit the more we ask it to make finer-grained distinctions. Making finer-grained distinctions is more likely to tweak personal, professional, and scientific biases and constraints. I mean, why should charisma be a factor in making scientific recommendations? More importantly, why is NSF shooting itself in the foot by suggesting to the public and to Congress that some large percentage of proposals are NOT "recommended"? Many will read this as NOT worthy, but that is NOT the case. At least some proposals that are not recommended for funding are, in the opinion of the panel, worthy of funding! Thereby we misrepresent the under-funding of science -- "but the expert panel said this stuff wasn't worth funding, so why increase funding!"
 
5) It's often the case that there is no consensus on the final, close-call recommendations by a panel. This difference of opinion can and should be represented in the Panel Summary. If one or more panelists believe that a proposal should be rated more highly (and in any case), make sure that opinion, and the reasons for it, are expressed in the Panel Summary and that the PD has heard the argument during discussion (because of what I will say in the next point about PD discretion). In fact, a recommendation (HC, C, NRF) by the panel is NOT required (what's the PD going to do? -- "make you" do something??!! -- no chance, only in your head). In one or two situations I had a panel split down the middle, with no one willing to budge on HC versus C (for example), so they described the deadlock and left the recommendation box unchecked. I had heard what I needed to hear to make a recommendation.

6) In my experience, NSF PDs are relatively quiet during review panels -- and I think that's a good thing. An NSF PD is not a DARPA PD, thank goodness, nor vice versa, thank goodness. NSF PDs have visions for their fields, but their actions are highly modulated by the research community, at least within their core discipline areas (PDs often branch out more when creating and implementing interdisciplinary initiatives that will influence their fields).
     A PD needs information for making recommendations, and while the panel's recommendations are the single most important factor in a PD's own recommendation, they are far from the only factor. There is portfolio balance (where balance does not imply equal cardinality), institutional balance (ditto), PI balance (ditto), balance within the larger programmatic unit (e.g., robotics versus natural language processing versus …), …, AND WHAT THE PD HEARD DURING THE PANEL DISCUSSION. A good PD is a good listener. A good PD will likely speak up from time to time, but not too much. When I have seen what I regard as a PD stepping over the line and being too prescriptive, it's been a rotator.
     In some sense it doesn't matter too much if a review panel "overfits" in its recommendations, because while a PD is very influenced by a panel, the PD is NOT tied to it. In fact, arguably the PD is there to compensate for panel overfitting, scientific conservatism, and bias. It's no small thing to decline a Highly Competitive proposal because you think a Competitive proposal should be funded instead (and there are not the funds to do both), and all this needs to be justified IN WRITING, so there is nothing flippant about any of this. On rare occasions a Not-Recommended-for-Funding proposal may be funded (because that's not the same as Not Worthy), but that takes considerable justification.
    Thus, you might see a PD remain silent during the panel itself, because the panel is there for the PD to collect information, not to make final decisions. Should a reader advocate, in contrast, that a PD take a "leadership role" on the panel, for example on the importance of BI, recognize that that is a slippery slope. When I opened my mouth, it was most often to ask or answer a question, but yes, I would have to ensure that the panel addressed BI to my satisfaction, that they wrote a respectful and informative panel summary, etc.
    That said, I think it's a wonderful thing to set aside a session before the panel begins to talk to the panelists about issues of intrinsic bias, broader impacts, etc. But once the panel starts, don't start (trying to) direct them TOO MUCH, else you won't know where they will go on their own, informed by the factors that they are in a position to assess, and a PD will thus confound her or his decision-making process with the panel's.
    Would I advocate that we not push panels to make the fine-grained distinctions among those last close calls on the borders of categories (e.g., HC, C, NRF)? Sometimes perhaps, but suffice it to say that having a panel make the fine-grained distinctions gets them to talk through the issues thoroughly, and it's one mechanism for getting the issues on the table and heard by the PD, even if a PD might come down differently on the close calls than the panel does.
     But alas, there is another reason that PDs and their superordinates may push panels to make those final hard calls! Those final placements into HC, C, and NRF are heavy lifting, and if the panel doesn't do it, the PD must. It's not that I think the PD will do a better or worse job in those final placements (though the PD might use different tiebreakers than a panel) -- it's that the PD often just doesn't have the time. Exercising discretion, when you are (thankfully) obligated to justify it, takes a lot of time, which a PD often doesn't have.

Time, Time, and Time


Lots more I could say here, but let the following general points anticipate suggestions that "NSF" (as if NSF were monolithic) do this or that. Most NSF staff are working very long hours, and this includes a lot of in-the-trenches work. In the CISE (Computing) Directorate, I would sometimes think that if work weeks of more than 50 hours were made illegal, with stiff penalties for violators, there would be a year of extraordinary angst and pain for NSF and academia, followed by consistency, and organizational and programmatic sanity. It's only because of extraordinary hard work by many NSF staff that the whole system doesn't fall apart, but institutional performance is degrading, albeit gracefully. Increases in funding to NSF generally go to new scientific funding programs, each of which increases overhead, and not toward increases in staffing. After getting back to Vanderbilt, I recall the excitement caused by the Robotics Initiative!!! And it was exciting. But you can bet that the overhead associated with it came out of the hides of NSF staff.

I've heard that NSF talks out of both sides of its mouth on broader impacts and other issues, or that it drops the ball on this or that. Consistency requires training, and that requires time. Going against a panel recommendation (supporting a Competitive proposal because of BI over a Highly Competitive proposal) requires justification in writing, which requires time. Reading and pushing PIs for BI updates as well as IM updates requires time. Getting the "best" panelists to peer review proposals requires time, because in CISE at least, PDs will often get an acceptance rate on panel invitations of 20-30%; I had higher rates -- about 60-70% as I recall, because I allowed panelists to "phone in" (http://science-and-government.blogspot.com/2011/08/virtual-panelists-and-thoughts-on.html) -- but still, designing and recruiting and running a balanced panel takes time. And of course big thinking takes time, be it on designing funding programs along societal dimensions such as sustainability, health, and education, or tech/science dimensions such as robotics, computational game theory, etc.

A major constraint on NSF -- or I should say on staff within NSF -- in responding to suggestions to do "this or that" is time, time, and time. In addition to writing NSF, write Congress about funding of science, and funding of the staff who create, implement, and run the programs.

Saturday, November 2, 2013

Rotating Program Directors at NSF

I recently commented on a blog post by Jeffrey Mervis on the AAAS Science blog (http://news.sciencemag.org/policy/2013/10/special-report-can-nsf-put-right-spin-rotators-part-1), which acknowledged the pros of using faculty members from academic institutions as "temporary" or "rotating" program directors at the National Science Foundation (NSF), side by side with permanent Federal staff; Mr. Mervis' article also points out that monetary savings might be achieved over the present implementation of NSF's rotator program.

I served at NSF as a rotating program director in the Computer & Information Science & Engineering (CISE) Directorate from 2007-2010 and have thoughts on the NSF rotator program. I repeat my comments on Mr. Mervis' article here, having emphasized in those comments which savings might be most productive and doable; in addition, I think that some of the other recommendations for monetary savings in the Inspector General (IG) report cited in Mr. Mervis' original post seem less achievable or even less desirable -- maybe I will elaborate another day. I also argue that NSF should broaden its perspective on the possible benefits of rotating program directors.

-----

Your post (part I) and the IG’s report paint an accurate, though brief, picture of the IPA program: IPAs (and other staff) work hard and very competently, benefiting science and engineering research and education in the United States, but cost savings are possible. Of the suggested savings, reducing IPA travel back and forth between the home institution and NSF would probably be (the most) productive. Frequent (e.g., weekly) travel by an IPA is costly, and it can also disrupt operations in NSF’s team-oriented environment. For IPAs who commit to a life predominantly in the DC area, I hope that NSF continues to pay for their relocation. However, for those who would prefer life predominantly at their home institution, let them telework, probably after an onsite orientation period that is designed to protect NSF esprit de corps. In either case, limit travel back and forth to some sensible number of trips, because 50 IRD trips a year is ridiculous, even if 50 days of IRD is not. This might also put NSF in a better position to negotiate partial IPA compensation from the institutions of those rotators who stay at home (the idea that NSF should expect home institutions to partially compensate IPAs who are working extraordinary hours for the government -- and that's particularly true of anyone onsite at NSF -- seems misplaced). Importantly, these arrangements are easier said than done, at least while preserving the benefits of the IPA program.

While I limited trips to my home institution of Vanderbilt University, I nonetheless ran two “virtual” review panels from my Vanderbilt office, supporting the IG’s contention (and that of many in NSF’s operational divisions too!) that much can be done through remote communication technology. And now we are getting into a largely underutilized advantage of the IPA program – that IPAs can benefit NSF operations as well as the scientific mission. IPAs are smart, usually very dedicated people who are watching and innovating the operations of NSF. For example, fully 3/4 of the review panelists that I recruited were virtual panelists – they participated by phone or video conferencing, and saved NSF substantial travel costs. My supervisors in the organization, including two IPAs, supported this activity. Other IPAs, as well as some members of the permanent staff, innovated in similar ways. If NSF made a commitment to supporting IPAs who wish to telework from their home institutions, with safeguards in place to protect high-quality communication, responsiveness, and NSF esprit de corps, it would go a long way toward building a culture in which much larger monetary savings could be realized through the use of virtual panelists (http://www.sciencemag.org/content/331/6013/27.full), as well as reaping the other substantial advantages of virtual panelists (http://science-and-government.blogspot.com/2011/08/virtual-panelists-and-thoughts-on.html).

Apropos the possibility of operational benefits from IPAs: exit interviews of IPAs seemed spotty, and certainly not universal, when I was there. It strikes me as a terrific lost opportunity if NSF is bringing in talented faculty members, almost all of whom have the luxury of speaking their minds because of the job security that stems from tenure, and is not exit interviewing them and then acting on those interviews!

The IG report also suggests the desirability of a person or office dedicated to evaluating the IPA program on a continuing basis – that is a terrific idea. I have no doubt that ongoing evaluation would affirm the scientific advantages of the IPA program and improve IPA management. In particular, John Conway’s article alludes to the “cultural” differences that often exist between academia and the team-oriented environment of NSF. An IPA-oversight officer who respected and appreciated the IPA mission would presumably help define best practices of IPA orientation, training, and management to effect the transition to the NSF environment, as well as evaluate the program.

Finally, part 2 (http://news.sciencemag.org/people-events/2013/10/special-report-can-nsf-put-right-spin-rotators-part-2 ) of your article highlights a case where an IPA may have been powerless and dismissed summarily. I do not know this case, but five comments seem relevant and responsible: (1) I was proud of NSF’s policies and practices regarding conflicts of interest (COI), and I wish they were standards practiced throughout our Federal government; (2) my experience was that the professional ethics officials at NSF were honorable, highly competent, and responsive to requests for clarification and other help on COI issues; (3) the COI standards are high (thus my pride), but I would regard a case like that outlined as forgivable and correctable in a gentler and more constructive fashion than that described -- I can imagine circumstances in which I might have missed real or perceived COIs too; (4) if there were an officer responsible for assessing the IPA program at NSF, then presumably they would have looked carefully at the actions of all IPAs involved, including supervisors, and made corrective recommendations on IPA training and management at all levels; and (5) the individuals within NSF best placed to speak out on any injustice might well be IPAs, again because of the job security that stems from tenure at their home institutions. That’s not to say that rotators should be watchdogs, but more thought should go into how to use IPAs effectively to inform operations and management, as well as science.

Tuesday, November 6, 2012

"Inside Job", COIs, and Academic Economics


<originally posted August 2011 on earlier blog>

I've been watching the "50 Documentaries you must see before you die" or some such title. "Inside Job" (http://www.insidejob.com/) was one of them -- as the title suggests, it's a story of the economic collapse, and it includes a very disturbing portrayal of economics in academia (and elsewhere). Its portrayal of economics motivated me to write to the Association of American Universities (AAU: http://www.aau.edu/). That organization's "Scientific Enquirer" is a response to the drubbing that science has taken in some quarters, and I think it's an informative publication, though very new (http://www.aau.edu/research/Science_Enq.aspx?id=12370).

***

Dear AAU,

I recently watched the Academy Award winning (2010) documentary “Inside Job.” In the eleven minutes starting at 1:22:30 in that film, academic economics was portrayed very negatively. The film’s claim that (a) some in academic economics are seriously conflicted with the financial industry and that (b) this conflict contributed to current, severe worldwide economic troubles through long-term academic advocacy of deregulation, is the most disturbing portrayal of academia that I recall seeing. While the film highlights only a few institutions and a few faculty members and university administrators, there is a suggestion that the problem of conflict of interest is systemic in academic economics.

Your Press Release of 02/28/2008 on Conflicts of Interests states:

“According to the report, institutional conflicts of interest are becoming a growing concern as academic institutions assume more complex roles and expand their relationships with industry. Conflicts of interest policies are critical to assure that these essential interactions remain principled and are conducted within a rigorous, transparent, and credible framework.” (p. 2, first link of http://www.aau.edu/policy/COI_policies.aspx?id=10096)

While your recent documentation appears to focus on health care, if there is any truth to the film’s claims, in whole or part, the statement would apply to (segments of) academic economics, arguably with even broader societal significance than healthcare.

I hope that AAU is participating in, if not leading, the development of conflicts-of-interest policy and training in economics, and in raising awareness of such policy, if in fact you find this to be appropriate. Additionally, if academic economics was significantly misrepresented in the film, even while it showed some academic economists in a positive light as commentators, then this misrepresentation (again, in whole or part) could be appropriate subject matter for commentary in the Scientific Enquirer.

Friday, August 19, 2011

Coburn Report on NSF


(Copied from a July post on my "home" blog)

Last week I responded to my Congressional delegation on Senator Tom Coburn’s report entitled “The National Science Foundation: Under the Microscope” (http://coburn.senate.gov/public//index.cfm?a=Files.Serve&File_id=2dccf06d-65fe-4087-b58d-b43ff68987fa). I was aware that some scientists whose projects were represented in the report had already responded through blogs and other public forums. Having worked at NSF, however, there was an aspect of the report to which I could respond specifically, and I decided to stick pretty closely to observations that this unique perspective offered. My letter to Senator Lamar Alexander (R-TN) is below; the same letter was also sent to Senator Bob Corker (R-TN) and to Congressman Jim Cooper (D-TN) representing Tennessee’s 5th District.

*******

July 21, 2011
Senator Lamar Alexander
3322 West End Avenue, #120
Nashville, TN 37203

Dear Senator Alexander:

I recently read the entirety of Senator Coburn’s report “The National Science Foundation: Under the Microscope.” You are one of my senators and I am thus writing to you to express significant concerns with the report, focusing on those for which I have something of a unique perspective. From July 2007 through August 2010 I was on leave from Vanderbilt University, serving as a Program Director at the National Science Foundation (NSF) in the Directorate of Computer & Information Science & Engineering (CISE). It was a pleasure and honor to serve at NSF as a member of a hardworking, dedicated team. I regard my time at NSF as service to my country of which I am very proud.

Foremost, I worry that there seems to be an attitude of disrespect for NSF staff permeating the report. For example, the report includes a section “NSF Flying High with First-Class Junkets” (p. 14), with no indication in the section that anyone actually did this. Indeed, I never flew first class at NSF, or on a junket, nor do I know of colleagues who did. In a second example, when commenting on NSF’s desire to become environmentally friendlier, the report says “Some might find it interesting to note, then, that the NSF currently owns 375 vehicles, including 52 sports utility vehicles” (p. 15). This statement is flippant and vague, yet it insinuates that NSF staff are hypocritical, not environmentally conscious, and/or that the vehicles are not used for science. A third example is the report’s claim that porn surfing was “pervasive” (p. 15), with at least six citations of the same article in The Washington Times. This statement is very wrong. I can believe that such cases, though anomalous, have consumed a large part of the Inspector General’s time in the recent past, since a tiny proportion of a group is often responsible for a large proportion of the angst. We have seen a very recent example in Congress of misbehavior that consumed large amounts of time and energy, but I would not claim that such behavior was rampant in Congress.

My experience at NSF contrasts with the report’s representation of NSF staff activities. My colleagues and I performed many diverse tasks, including the vetting of research proposals with input from other scientific experts, preparing research solicitations, preparing and giving outreach talks to the public, and representing NSF and our country overseas. It is because of administrative and scientific staff dedication that the agency functions as well as it does in spite of very heavy workloads. I worked 60+ hour weeks and this really was pervasive across the Foundation. It appears that even when the President and Congress agree on budget increases for NSF, it is for scientific initiatives that come with yet more overhead and not for the addition of staff to deal with that overhead. I very much encourage you to consider additional funding for staff as well as scientific initiatives.

I believe that the report’s tone will cause many to reject the report entirely, including points that I think have validity. For example, I generally agree with the report’s statements that a discussion about funding priorities is important. In fact, the very hard discussions I had with my fellow program directors at NSF were on what projects to fund given differing opinions on priorities and our limited budget. Our budget only allowed us to fund about one-third of the projects that had been judged by scientific experts to be most worthy of funding, which in turn was about one-third of all projects submitted; these proportions varied across the agency. The heartbreaking part of my job at NSF was that the majority of projects worthy of funding could not be funded, and my colleagues and I knew the costs to science research in our country and the costs to the faculty members and students behind that research. NSF personnel take the job of assessing the scientific qualifications of projects and funding priorities very seriously.

I also agree in general with the report on the importance of metrics and tools to evaluate the payoffs of scientific investments; I believe that NSF staff would welcome such tools with open arms. Data analysis and visualization tools are critically needed to track scientific investments and to evaluate the US funding portfolio within and across agencies. The report highlighted the importance of the STAR Metric initiative, which is aligned with the Science of Science and Innovation Policy (SciSIP) program housed in the Social, Behavioral and Economics (SBE) Directorate of NSF. Thus, I was surprised that the report also recommended eliminating the SBE Directorate. I can’t emphasize enough how wrong I think this would be. I am a computer scientist and daily I witness how computing technology is transforming the ways that humans interact, perceive, decide and learn. The last thing we need is to cut research on understanding human behavior in a time of transformative technology. We should understand, for example, what video games and social networks are doing to our children and all citizens, so that we can design technology to enhance learning and decision-making, not diminish it.

Finally, I fear the report’s tone because it comes across as demeaning public servants whom I know to be dedicated, talented and industrious. I am not sure where degrading stereotypes of ‘government bureaucrats’ originated, but I for one, a lifelong academic, gained an incredible respect for the hard work and brains of federal staff, not just at NSF, but in agencies and departments across government.

Government colleagues can and should offer constructive criticism to one another, but I believe Senator Coburn’s report doesn’t paint an accurate picture of those at NSF who loyally and diligently serve their country. In particular, I wanted to convey my experience to you that NSF is an institution that Americans can be proud of, respected and emulated the world over, with staff who do their best in making difficult decisions on matters of national and scientific importance, despite a limited budget and a heavy workload.

Thank you for your attention and for your service to our country.

Respectfully,
Douglas H. Fisher

**** End letter ****

Generally, what significant empathy I have for the challenges currently faced by Congress and the President is due to my NSF service. It also makes me sick that so many members of our government appear to be so disrespectful of each other.

Despite my fears about the Coburn report, I think that some good things could come of it and the responses that are following. Notably, I hope that scientists, after reading the report, see the vital importance of communicating science to the public, including Congress and scientists in disciplines other than their own. I’d like to see every research team be associated with those skilled in communicating scientific findings and their national and international relevance to the public. Universities have individuals skilled in communicating science to the public, but they are probably too few and far between – communication can be integral to research projects, and researchers can ask for the funds necessary to support that activity. Rather than blog posts erupting after reports such as this to explain and justify scientific research, maybe we’ll see more proactive outreach.

Generally, my experience suggests that scientists and engineers can more fully embrace their role as citizens, and all citizens should recognize that science and engineering are integral to citizenship. I could be wrong, because I might be working from a biased sample, but my sense is that not a lot of scientists write their elected officials, advocating more funding for science research and the like. I hope that scientific professional organizations not only respond themselves, which is happening, but that they encourage civic engagement by their individual members. It’s probable that climate scientists have internalized this message over the last few decades, and the social scientists may be getting it as well.

In any case, while my 3+ years at NSF reenergized the teacher in me and the researcher in me, it may have reenergized the citizen in me most of all. I hope so.

Virtual Panelists and Thoughts on Assessing Science


(Copied from a February post on my "home" blog -- it's still timely)

I was quoted (correctly) in a commentary in Science entitled “Meeting for Peer Review at a Resort That’s Virtually Free” (http://www.sciencemag.org/content/331/6013/27.full) — nice. The article talks about the advantages of using virtual technology to convene scientific review panels at the National Science Foundation (NSF) and other Federal agencies like NIH instead of flying the panelists cross country for a physical meeting of a day or two. The article focuses on the very cool technology of virtual worlds, like Second Life (http://secondlife.com/), to host such activities, but video conferencing and phone are alternatives. Some might think that a phone is primitive technology; perhaps, but a land line is extremely reliable and not as primitive as an airplane, at least relative to the task of putting talking heads together.

In the Science article the lead reason presented for using virtual participation for NSF (NIH, etc.) is that it saves money. I am somewhat conflicted on how to talk about this. On the one hand, the article says that approximately 19,000 reviewers were used by NSF last year; even if all of these had traveled to NSF, but were replaced by 19,000 virtual panelists, saving $1000 per panelist (all VERY optimistic), that would be savings of $19,000,000 (19 million). That may seem like a lot, but it is only a dent in NSF’s $7 billion yearly budget, which in turn is only a wafer-thin slice (roughly 0.002) of the federal budget of $3.64 TRILLION (http://nationalpriorities.org/en/resources/federal-budget-101/budget-briefs/federal-discretionary-and-mandatory-spending/). Nonetheless, the current budget debates suggest that the President and Congress are focused on pocket change, so perhaps demonstrating any small budget cut would be the sacrificial lamb needed to buy NSF, and science/engineering generally, some political goodwill; and when such savings are spread across all such travel across all agencies, they could be significant.
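The back-of-the-envelope arithmetic above can be sketched out explicitly. This is just a check on the post's rough, optimistic figures (19,000 reviewers, $1000 saved per panelist, a ~$7 billion NSF budget, a ~$3.64 trillion federal budget); none of these are official numbers.

```python
# Rough, optimistic figures from the post -- not official budget data.
reviewers = 19_000              # approximate NSF reviewers in a year
savings_per_panelist = 1_000    # assumed dollars saved per virtualized panelist
nsf_budget = 7e9                # NSF yearly budget, ~$7 billion
federal_budget = 3.64e12        # federal budget, ~$3.64 trillion

total_savings = reviewers * savings_per_panelist
print(f"Total savings: ${total_savings:,.0f}")                # $19,000,000
print(f"Share of NSF budget: {total_savings / nsf_budget:.4f}")       # ~0.0027
print(f"NSF share of federal budget: {nsf_budget / federal_budget:.4f}")  # ~0.0019
```

Even under these generous assumptions, the savings are well under 1% of NSF's budget, and NSF itself is about 0.2% of federal spending.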

I worry though that if monetary savings are presented as the lead story when talking to researchers/panelists, it will convey the (wrong) message that quality in the review process is being sacrificed to save money, and for the researcher/academic this could easily be a source of disappointment, if not resentment — after all, every two days the national debt grows by over 8 billion dollars (http://www.usdebtclock.org/), comfortably more than the yearly NSF budget! Amazing! Deeply discouraging. If virtual participation on panels picks up, it would be terrible to have scientists and engineers (or heck, anyone who cares about the US’s future) falsely believing that a few bucks are being saved out of the hide of science and engineering research in the US. Thus, I winced a bit that monetary savings took the lead in the article.

The reasons for using virtual panelists, and particularly in giving panelists a CHOICE on going to Washington DC (or anywhere, and for any organization) or participating virtually are many. The article does NOT address most of these reasons.

(1) Virtual participation reduces travel wear and tear on panelists — West coasters, rural residents and others underserved by airports, those overseas — this was alluded to in the article; travel is a great inconvenience or outright impossibility for many, but it's a pleasure for many others, and/or an opportunity to network. I like a system in which panelists weigh the costs and benefits and choose for themselves whether to attend physically or virtually. So, rather than suggesting an artificial dichotomy between all physical and all virtual, let's recognize that there are hybrids that allow both. Behavioral economics suggests that one can influence the proportions of the two kinds of participants by making one form (e.g., virtual) the default, and indicating the other option (e.g., physical) as welcome. If default specification strikes some as “mind control,” I suggest that it's preferable to requiring one form (e.g., physical).

(2) Virtual participation broadens participation to many who might not otherwise serve — parents of young children, senior and very busy researchers, those who have to teach a class, and/or those who an agency might not otherwise ask because of monetary cost, such as those overseas, who may well have special expertise in an area that would benefit the US.

(3) Virtual participation reduces wear and tear on government agency staff; this reason is probably the most under-appreciated by those outside of government agencies. I am talking here about administrative staff primarily — arranging catering, cleanup, travel and reimbursements, and many other miscellaneous responsibilities requires a lot of time. Many federal admin staff commute an hour or more EACH way and have oodles of other responsibilities to which they must attend. Early in my 3-year NSF career I attended an outreach talk at San Jose State University, in which the NSF/OLPA presenter showed a graph of the number of proposal actions by NSF, and these actions had grown at a clearly faster-than-linear rate over the last 25 years. In this same period staffing numbers remained flat. The implications for workload are obvious. I don’t see these trends changing — even when the President and Congress agree to funding increases, it is for new programs that come with additional overhead, and not for increased staff; the research community, given the historically very low funding rates, will continue to push proposal pressure up. Streamlining for operational efficiency is absolutely necessary. Using virtual participants offloads burden from staff, so that they can do other necessary things and maintain quality.

(4) Virtual participation can improve important aspects of the proposal vetting process; giving choice to prospective panelists can only increase acceptance rates among them, increasing the number of first choices among the experts, and reducing workload by those having to research and solicit prospective panelists. Again, these time savings get channeled into other important activities.

(5) Virtual participation does not diminish the quality of information necessary to make funding recommendations. Some nominal skills as a moderator are required to ensure that all panel voices are heard, important issues debated, etc., but one could accurately assert the same need for nominal skills for all-physical panel moderators too. My perception is that virtual participants are every bit as well prepared as physical panelists and as attentive, and not harried or worried about catching a flight. I sometimes hear pushback that there is “something about physical presence”, and there *is*, and much of it is good, but it's irrelevant for purposes of making science funding recommendations. Indeed, facial expressions, winks, and hand gestures are relevant to surviving together in the jungle, but if they are really nontrivial factors in panel recommendations, then respectfully, you are begging to overfit the data.

(6) Virtual participation decreases ecological footprints. In almost all white papers on the environmental impacts of using information technology, the most common *proposed* POSITIVE impact is to offset footprints in other sectors, notably travel. And so why isn’t this actually happening?! This is the low hanging fruit of the promise of information technology, and if technologists don’t start exercising its promise, there is really no point in expecting others to do so. That said, I have a friend since kindergarten who is an airline pilot, a sister-in-law who is a flight attendant, a friend whose spouse is a pilot — facing the financial consequences for many people of changing lifestyles has to be part of what we worry about — but this worry can’t stop us from acting to change unsustainable lifestyles either!

Again, there are things important about physical presence that can’t be beat — at a minimum, the intellectual and social reward to a panelist, as well as the moderator — I loved dinner out with those few panelists that came to NSF for panels that I ran. Networking is important too: it is possible that hallway talk will lead to an exchange of ideas and perhaps even collaboration. But this doesn’t happen much, certainly not to the degree suggested by theory, with hallway talk getting displaced by checking email and cellphone chatter. But if this networking were really a desired capability, then let's do it deliberately, do it by design, and do it virtually, in small groups and at regular intervals — why leave it to chance, as an accident of flying people across the country?

But in any case, I would hope that allowing a choice between attending physically and participating virtually becomes part of meeting designs in the future. Heck, if the timing were right and I could set up other meetings, sightsee, and otherwise make good use of the trip, I’d go physically — I don’t know of anyone who likes slapping a back as much as I do.