
Continue efforts on prototype tasks and stalled tasks #30

Open
mbernst opened this issue Dec 22, 2017 · 7 comments

Comments


mbernst commented Dec 22, 2017

Problem

Prototype tasks are the sample tasks that launch on the marketplace before the full task, allowing workers to give feedback and throttle the task if it is poorly designed. They were designed and developed by the Collective as a central thrust and included in the original platform version and original paper submitted by the group.

Stalled tasks are live tasks on which work has stopped --- nothing is happening --- and requesters typically don't know why.

As Daemo has grown, we have gotten useful feedback, including:

  • Workers complain that they give feedback on the task but requesters don't incorporate it
  • Workers tend to be very permissive in allowing tasks to launch, but then don't do them, indicating that perhaps the task wasn't actually competitive
  • Requesters complain that tasks are stalling without any useful information about how to fix them. (This happens on AMT too, but we think we have an opportunity to do way better.)

Given how important prototype tasks have been, and how a stalled task provides a bad experience for requesters, this is a great opportunity to take that feedback and iterate.

Proposal

This strategic proposal continues our focus on prototype tasks and stalled tasks. Both seem to offer major experience improvements for requesters and clearer, better work for workers. In addition, stalled-task analysis could potentially be combined with prototype tasks into an excellent research paper on helping debug task design, allowing us to publish prototype tasks in an archival venue as we have always wanted.

Potential foci for prototype tasks include:

  • Find ways to make sure feedback is incorporated, or signal that if the task is good the feedback may be ignored, or only ask for feedback if the worker signals the task isn't good enough
  • Iterate on the question asking whether the worker would do the task themselves, perhaps drawing on some of our DCE experiments
  • Could we allow workers to directly annotate the task to clarify for each other's benefit, rather than in the forum thread below? Essentially make the task more wiki-like, as we have discussed previously

Potential foci for stalled tasks include:

  • Development of an automated measurement signal that a task has stalled
  • Ongoing iteration of the discrete choice experiments (DCEs) that the SCRC has been performing on stalled or prototype tasks, allowing the system to quantify how bad a task is (e.g., 10th percentile on the marketplace) and tell the requester how much impact possible changes would have (e.g., raising the price would put it at the 40th percentile, while including examples would put it at the 50th).
  • If successful, automatic launching of this debug DCE and other feedback tasks for stalled work or possibly at prototype stage, and emails to requesters with the results advising them what to do.
    We would wizard-of-oz some of these things (e.g., launching debug tasks, sending results to requesters) until such time as we feel ready to put them into production.
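To make the stalled-task ideas above concrete, here is a minimal sketch of what an automated stall signal and a marketplace percentile comparison might look like. All names and thresholds here are hypothetical illustrations, not Daemo's actual implementation: a task is flagged when its recent completion rate collapses relative to its own history, and its rate is ranked against other live tasks.

```python
# Hypothetical sketch (not Daemo's implementation): detect a stalled task and
# rank its completion rate against the rest of the marketplace.
from dataclasses import dataclass
from typing import List


@dataclass
class TaskActivity:
    task_id: str
    daily_completions: List[int]  # submissions completed per day, oldest first


def is_stalled(task: TaskActivity, window: int = 3, min_history: int = 7,
               drop_ratio: float = 0.25) -> bool:
    """Flag a task as stalled when its mean completion rate over the last
    `window` days falls below `drop_ratio` times its historical mean."""
    history = task.daily_completions
    if len(history) < min_history:
        return False  # not enough data to judge yet
    recent = history[-window:]
    baseline = history[:-window]
    baseline_rate = sum(baseline) / len(baseline)
    if baseline_rate == 0:
        return False  # task never had traction; a different signal applies
    recent_rate = sum(recent) / len(recent)
    return recent_rate < drop_ratio * baseline_rate


def marketplace_percentile(task_rate: float, all_rates: List[float]) -> float:
    """Percentile of a task's completion rate among all live tasks;
    e.g. 10.0 means the task is slower than 90% of the marketplace."""
    below = sum(1 for r in all_rates if r < task_rate)
    return 100.0 * below / len(all_rates)
```

In a production version, the detector's output might trigger the debug DCE launch and the requester email described above; the thresholds would need tuning against real marketplace data.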

Implications

In the short term, this proposal indicates a focus on problems we face right now rather than on creating new designs and features. I personally feel that prototype tasks and stalled tasks are significant research wins, and am inclined to make one more push so that we can iterate on them and resubmit (to a spring conference?), plus solve endemic problems on Daemo.

Long term --- task authorship was one of the major research thrusts that the Collective defined initially, and this proposal helps keep it front-and-center.

Person or People Who Added the Proposal to GitHub

The name in this section doesn't indicate that the person came up with this idea, unless and until explicitly and clearly mentioned.

@mbernst (Michael Bernstein) added it to GitHub


To officially join in, add yourself as an assignee to the proposal. To break consensus, comment using this template. To find out more about this process, read the how-to.

@mbernst mbernst added this to the Strategy milestone Dec 22, 2017
@mbernst mbernst self-assigned this Dec 22, 2017
@markwhiting markwhiting self-assigned this Dec 22, 2017

neilthemathguy commented Dec 23, 2017

Can you point me to the hangout or conversation where this was stated by @dmorina?

@mbernst said

Long term --- a question that we may need to answer as we pursue this proposal, which @dmorina raised: Daemo could react quickly and tell the requester if the task needs to be improved to get results or better results. He points out that to have a high quality marketplace, you need to be aggressive about pruning bad tasks and telling requesters they need to fix it or go elsewhere. [.....]

What is the strategy to figure out what tasks are bad before pointing fingers on the requesters? How will Daemo react quickly? We should not forget that requesters are paying money to launch these tasks. The platform doesn't even have 10 requesters and you want them to go elsewhere!

@mbernst said

If successful, automatic launching of this debug DCE and other feedback tasks (on @mbernst's research budget for now) for stalled work or possibly at prototype stage, and emails to requesters with the results advising them what to do.

@qwertyone (anotherhuman) has raised a finance-related issue in #38. This will help us first figure out financial ways to support the Collective beyond funds or grant money from Stanford or a PI.


mbernst commented Dec 25, 2017

Can you point me to the hangout or conversation where this was stated by @dmorina?

I remember it coming up in conversation around the [anonymized --- please ask on Slack] task in Nov-Dec. The collective ran a "debug" task to diagnose why it was stalling, and sent the results to the requester. The requester said that there were technical and design reasons that they didn't want to make the changes requested by the workers. So the task remained on the marketplace, but stalled, and there was some feeling by the requester that it was Daemo's fault. (As it happened, the requester later launched an identical task on AMT and it stalled there too.) @dmorina if you want to say more here, it would be appreciated.

What is the strategy to figure out what tasks are bad before pointing fingers on the requesters? How will Daemo react quickly? We should not forget that requesters are paying money to launch these tasks. The platform doesn't even have 10 requesters and you want them to go elsewhere!

This comment makes me feel attacked. Trying to set that aside: the point is that we are going to have to make tough decisions about this. Daemo already does, for example by allowing workers to prevent tasks from launching out of prototype mode. We have had discussions about the value of regulating bad behavior (e.g., poor work, poor tasks) as in the Kraut and Resnick book on online communities.

Trying to put it more neutrally, the situation that we will have to design for collectively as part of this effort will be: suppose Daemo knows what would need to be done to make a task's results better or to encourage workers to do it, but the requester refuses and says that the workers are bad or Daemo is bad.


mbernst commented Dec 25, 2017

@qwertyone (anotherhuman) has raised finance related issue in #38. This will help us first figure out the financial ways to support the collective and not just funds or grant money from Stanford or a PI.

Great. In the meantime, I'm just indicating that I view this as an open research question and am happy to help support it. Eventually this needs to be made cost-neutral or the platform couldn't sustain it --- for example, by asking the requester to pay to launch the debug task.


neilthemathguy commented Dec 25, 2017

This comment makes me feel attacked. [...]

I'm trying to understand: what was said here that felt like an attack? There is no intention, whatsoever, of hurting anyone's feelings. The way the above proposal is written seems contradictory to the original intention of empowering requesters, workers, and Crowd Researchers to work together and solve the problems of crowdsourcing platforms. How will we build a community of requesters, workers, and Crowd Researchers if some of us think they should "go elsewhere!" when we are not good at explaining what quality means? Many crowd-researchers feel disrespected. Now we want to say requesters should go elsewhere, when we haven't even started.

Also, why is there a need to highlight single individuals? Aren't there other people who raised these questions in the past or led this area of research? The constant advocacy for exclusion (first crowd-researchers, now requesters, and next in line might be workers) won't lead us anywhere. We need to carefully think about the objective of this whole initiative and how the rest of the world sees it. Is it just to create a platform and impose individualistic ideas, or to foster collaboration among stakeholders to solve larger problems?

I and many others are already being targeted and attacked for standing up for the ethical conduct and values of the community. It is frustrating to see that every attempt that doesn't align with the thinking of a few is seen as an adversarial move or ill will.


mbernst commented Dec 25, 2017

OK, my comments did not convey effectively what I was intending to communicate. I will reflect on this.


neilthemathguy commented Dec 25, 2017

[...] "go elsewhere!" [...] shows way more attitude and arrogance.

I think it would be good to respect the governance process rather than trashing it.

"If members feel they are not fully knowledgeable about the context of the proposal, they can seek this information from the submitter."


mbernst commented Dec 27, 2017

After discussion in the hangout, I have updated the proposal above. Thank you for the feedback --- hopefully this is more in line.

@shirishgoyal shirishgoyal self-assigned this Dec 27, 2017
@neilthemathguy neilthemathguy self-assigned this Dec 28, 2017
@shirishgoyal shirishgoyal removed their assignment Jan 8, 2018