
Fix Bugs & Security

Bugs:

  • Tagging.esclusion_reason should be the label, not the exclusion_criteria. Although then we would also only print the labels on the left-hand side instead of the exact exclusion_criteria entries.
    • Thus we need to update the API call so that we get both the label and the exclusion reason (see the response-model sketch after this list).
    • We also have to update the frontend accordingly.
  • Fix the redirect callback bug (on first login we sometimes get stuck on the callback page). Doesn't happen on the deployed version.
  • Correct the progress on the dashboard (if the review is done, it should show 100%).
  • Correct the tag, i.e. the tag must match the state in frontend / backend (e.g. after clicking the "continue to manual screening" button, the state should equal the manual screening state).
  • The current states also need to be updated after clicking the button => update that in the API calls.
  • If a user creates their first review, we need to store them in the DB.
  • We need to check that the automation level works as expected, i.e. that the threshold etc. behave correctly in get_paper_model_and_tagging_model and everything with similar functionality.
  • (should be solved by the one above) Manual screening: correct paper IDs / papers to screen (e.g. we have 100 papers but only 10 of those need to be screened by a human).
  • (should be solved by the two above, i.e. only because of the weird setup) Total excluded in the statistics is sometimes > total papers (happens if e.g. the neutrino review excludes 10 papers and the user then rates them again). Not sure if this only occurs in my weird test setup and will be fixed as soon as we test with real data, because it should NOT happen that neutrino excludes a paper and then the user excludes it as well, right?
  • The automated-screening progress is also weird: it takes some time, then sends 5 requests at once (maybe an error on the frontend side?).
  • The manual screening page shows wrong n / m numbers. I think m should work now, but n might be wrong; needs to be tested.
    • Yes, it seems that n is a static number that gets increased by one each time we rate a paper. We might have to return something in the API call as well (e.g. where we fetch the papers to look at, add a query for papers that have been looked at by a human AND have llm_tag <= threshold); see the counting sketch after this list.
    • So at the moment we probably aren't shown the correct n / m if we leave the review or reload the page, unless the frontend handles this for us with useState().
  • Also check that everything works fine if we select more than one label / exclusion reason.
  • Check that the numbers on the result page make sense (e.g. that included + excluded is not larger than the total number of papers because included and excluded papers get counted twice, once via the LLM tag and once via the user tag).
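
For the first item above, a minimal sketch of a response model that carries both fields, assuming the backend uses Pydantic (as FastAPI does, which the docs / redoc item below suggests); TaggingOut and its field names are placeholders, not the project's actual schema:

```python
from pydantic import BaseModel


class TaggingOut(BaseModel):
    """One tagging entry: both the short label and the full criterion."""
    paper_id: int
    label: str             # short label printed in the left-hand list
    exclusion_reason: str  # the exact exclusion criterion behind the label
```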

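For the n / m counting bug, a rough sketch of the extra query, assuming a SQLAlchemy backend; the Paper model and its columns (llm_confidence, human_tag) are assumptions about the schema, not the real models:

```python
from sqlalchemy import func, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column


class Base(DeclarativeBase):
    pass


class Paper(Base):
    """Assumed minimal schema, not the project's real model."""
    __tablename__ = "papers"
    id: Mapped[int] = mapped_column(primary_key=True)
    review_id: Mapped[int]
    llm_confidence: Mapped[float]  # confidence score from the LLM tagger
    human_tag: Mapped[str | None]  # None until a human has rated the paper


def manual_screening_progress(session: Session, review_id: int,
                              threshold: float) -> tuple[int, int]:
    """Return (n, m): papers already rated by a human vs. all papers
    whose LLM confidence is at or below the automation threshold."""
    base = (
        select(func.count())
        .select_from(Paper)
        .where(Paper.review_id == review_id,
               Paper.llm_confidence <= threshold)
    )
    m = session.scalar(base)  # papers a human must screen
    n = session.scalar(base.where(Paper.human_tag.is_not(None)))  # already screened
    return n, m
```
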
Security:

  • Update the CORS middleware to only allow specific origins, headers, etc. (see the sketch after this list).
  • Deactivate docs / redoc
  • Remove dangerous API calls (e.g. get all users, which was used for testing).
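
A minimal sketch of the first two items, assuming the backend is a FastAPI app (suggested by the docs / redoc item above); the allowed origin is a placeholder, not the real deployment URL:

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

# Passing None disables the interactive docs, ReDoc, and the OpenAPI schema.
app = FastAPI(docs_url=None, redoc_url=None, openapi_url=None)

app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://review-tool.example.com"],  # placeholder origin
    allow_methods=["GET", "POST", "PUT", "DELETE"],     # only what the frontend uses
    allow_headers=["Authorization", "Content-Type"],
    allow_credentials=True,
)
```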