diff --git a/opensym2017_postmortem.Rmd b/opensym2017_postmortem.Rmd
index 62db158..5ef4bfc 100644
--- a/opensym2017_postmortem.Rmd
+++ b/opensym2017_postmortem.Rmd
@@ -86,12 +86,12 @@ Along with [Claudia Müller-Birn](https://www.clmb.de/) from the [Freie Universt
 In OpenSym 2017, we made several changes to the way the conference has been run:
 
 * In previous years, OpenSym had tracks on topics like free/open source software, wikis, open innovation, open education, and so on. In 2017, **we used a single track model**.
-* Because we eliminated tracks, we also eliminated track-level chairs. Instead, **we appointed a series of Associate Chairs or ACs**.
+* Because we eliminated tracks, we also eliminated track-level chairs. Instead, **we appointed Associate Chairs or ACs**.
 * **We eliminated page limits and the distinction between full papers and notes**.
 * **We allowed authors to write rebuttals before reviews were finalized.** Reviewers and ACs were allowed to modify their reviews and decisions based on rebuttals.
 * To assist in assigning papers to ACs and to reviewers, **we made extensive use of bidding**. This means we had to recruit the pool of reviewers before papers were submitted.
 
-Although each of these things have been tried in other conferences, all were new to
+Although each of these things has been tried in other conferences, all were new to OpenSym.
 
 # Overview
 
@@ -157,7 +157,7 @@ ggplot(data=topics) + aes(x=topic, fill=decision) +
     theme(legend.position="bottom", legend.direction = "horizontal")
 ```
 
-The figure above shows a breakdown of papers in terms of these categories as well as indicators of how many papers in each group were accepted. Research on FLOSS and Wikimedia/Wikipedia continue to make up a sizable chunk of OpenSym's submissions and publications. That said, these now make up a minority of total submissions. Although Wikipedia and Wikimedia research made up a smaller proportion of submission pool, it was accepted at a higher rate. Also notable is the fact that 2017 saw an uptick in the number of papers on open innovation. I suspect this was due, at least in part, to work by the General Chair [Lorraine Morgan's](https://www.nuigalway.ie/our-research/people/lorrainemorgan/) involvement (she specializes in that area). Somewhat surprisingly to me, we had a number of submission about Bitcoin and blockchains. These are natural areas of growth for OpenSym but have never been a big part of work in our community in the past.
+The figure above shows a breakdown of papers in terms of these categories as well as indicators of how many papers in each group were accepted. Research on FLOSS and Wikimedia/Wikipedia continues to make up a sizable chunk of OpenSym's submissions and publications. That said, these now make up a minority of total submissions. Although Wikipedia and Wikimedia research made up a smaller proportion of the submission pool, it was accepted at a higher rate. Also notable is the fact that 2017 saw an uptick in the number of papers on open innovation. I suspect this was due, at least in part, to the involvement of General Chair [Lorraine Morgan](https://www.nuigalway.ie/our-research/people/lorrainemorgan/), who specializes in that area. Somewhat surprisingly to me, we had a number of submissions about Bitcoin and blockchains. These are natural areas of growth for OpenSym but have never been a big part of work in our community in the past.
 
 # Scores and Reviews
 
@@ -177,11 +177,11 @@ ggplot(data=scores) + aes(x=sub.id) +
 ```
 
-The figure above shows scores for each paper submitted. The vertical grey lines reflect the distribution of scores where the minimum and maximum scores for each paper are the ends of the lines. The colored dots show the arithmetic mean for each score (unweighted by reviewer confidence). Colors show whether the papers were accepted, rejected, or presented as a poster. It's important to keep in mind that two papers were *submitted* as posters. Although Associate Chairs made the final decisions on a case-by-case basis, most papers that had an average score of less than 0 (the horizontal orange line) were rejected and most papers with positive average scores were accepted. We ultimately accepted `r num.papers.accepted` papers (`r paste(round(num.papers.accepted / (nrow(submissions) - 2)*100), "%", sep="")`) of those submitted.
+The figure above shows scores for each paper submitted. The vertical grey lines show the range of scores for each paper: the ends of each line mark the minimum and maximum scores. The colored dots show the arithmetic mean score for each paper (unweighted by reviewer confidence). Colors show whether the papers were accepted, rejected, or presented as a poster. It's important to keep in mind that two papers were *submitted* as posters. Although Associate Chairs made the final decisions on a case-by-case basis, every paper that had an average score of less than 0 (the horizontal orange line) was rejected and most (but not all) papers with positive average scores were accepted. We ultimately accepted `r num.papers.accepted` papers (`r paste(round(num.papers.accepted / (nrow(submissions) - 2)*100), "%", sep="")`) of those submitted.
 
 # Rebuttals
 
-This was the first time that OpenSym used a rebuttal or author response and were thrilled with how it went. Although they were entire optional, almost everybody used it! Authors of `r length(reviews[reviews$label == "Author response","sub.id"])` of our `r nrow(submissions)` submissions (`r round(length(reviews[reviews$label == "Author response","sub.id"]) / nrow(submissions)*100)`%!) submitted rebuttals.
+This was the first time that OpenSym used a rebuttal or author response step, and we are thrilled with how it went. Although rebuttals were entirely optional, almost everybody submitted one! Authors of `r length(reviews[reviews$label == "Author response","sub.id"])` of our `r nrow(submissions)` submissions (`r round(length(reviews[reviews$label == "Author response","sub.id"]) / nrow(submissions)*100)`%!) submitted rebuttals.
 
 ```{r, echo=FALSE}
 # histogram of changes
 
@@ -222,7 +222,7 @@ Although, I won't post any analysis or graphs, bidding worked well. With only tw
 
 Given a reviewer pool whose diversity of expertise matches that of your pool of authors, bidding works fantastically. *But everybody needs to bid*. The only problems with reviewers we had were with people who had failed to bid. It might be that reviewers who don't bid are less committed to the conference, more overextended, more likely to drop things in general, etc. It might also be that reviewers who fail to bid don't get good matches and, as a result, are less interested, willing, or able to do their reviews well and on time.
 
-Having used bidding twice as chair or track-chair, my sense is that bidding is a fantastic thing to incorporate into any conference review process. The major limitations are that you need to build a PC before the conference (rather than finding the perfect reviewers for specific papers) and you have to find ways to incentive or communicate the importance of getting your PC members to bid.
+Having used bidding twice as chair or track-chair, my sense is that bidding is a fantastic thing to incorporate into any conference review process. The major limitations are that you need to build a PC before the conference (rather than finding the perfect reviewers for specific papers) and you have to find ways to incentivize or communicate the importance of getting your PC members to bid.
 
 # Conclusions
 
@@ -234,5 +234,5 @@ Finally, it's also been announced that [OpenSym 2018 will be in Paris on August
 
 # This Analysis
 
-OpenSym used the gratis version of [EasyChair](https://www.easychair.org/) to manage the conference which doesn't allow chairs to export data. As a result, data used in this this postmortem was scraped from EasyChair using two Python scripts. Numbers and graphs were created using a [knitr](https://yihui.name/knitr/) file that combines R visualization and analysis code and markdown. I've made all the code I used to produce this analysis available in [this git repository](FIXME). I hope someone else finds it useful. Because the data contains sensitive information on the review process, I'm not going to publish the data.
+OpenSym used the gratis version of [EasyChair](https://www.easychair.org/) to manage the conference, and that version doesn't allow chairs to export data. As a result, data used in this postmortem was scraped from EasyChair using two Python scripts. Numbers and graphs were created using a [knitr](https://yihui.name/knitr/) file that combines R visualization and analysis code with markdown. I've made all the code I used to produce this analysis available in [this git repository](https://code.communitydata.cc/opensym2017_postmortem.git). I hope someone else finds it useful. Because the data contains sensitive information on the review process, I'm not going to publish the data.
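
As a rough illustration of the score figure discussed in the diff above, here is a minimal sketch in R. The `scores` data frame and its `sub.id` and `decision` columns appear in the excerpted chunk; the numeric `score` column, the toy data, and every other name below are assumptions for illustration, not the repository's actual code:

```r
# Minimal sketch (not the repository's actual code) of the per-paper
# score figure: a grey line from each paper's minimum to maximum score,
# a colored dot at the unweighted mean, and an orange line at zero.
library(dplyr)
library(ggplot2)

# Assumed shape of the data: one row per review (toy values).
scores <- data.frame(
    sub.id = c(1, 1, 1, 2, 2, 2),
    score = c(-2, 0, 1, -1, 2, 3),
    decision = c("reject", "reject", "reject",
                 "accept", "accept", "accept")
)

# Collapse to one row per paper before plotting.
score.summary <- scores %>%
    group_by(sub.id, decision) %>%
    summarize(min.score = min(score),
              max.score = max(score),
              mean.score = mean(score))  # unweighted by reviewer confidence

ggplot(data=score.summary) + aes(x=sub.id) +
    geom_linerange(aes(ymin=min.score, ymax=max.score), color="grey") +
    geom_point(aes(y=mean.score, color=decision)) +
    geom_hline(yintercept=0, color="orange") +
    theme(legend.position="bottom", legend.direction="horizontal")
```

Summarizing to one row per paper before plotting keeps the range and mean layers in sync, and matches the description of the figure: grey min-to-max lines, colored mean dots, and the orange line at the zero threshold.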