Reference no: EM133955848
Research Assistant
Research Readiness
As a prospective PhD research assistant preparing to work with a faculty advisor in Machine Learning:
Your goal is not just to understand papers, but to demonstrate research maturity, critical thinking, technical depth, and initiative: the exact qualities a PhD advisor looks for when selecting a research assistant.
This project evaluates your readiness to contribute meaningfully to ongoing research, not just follow instructions.
Project purpose: you must be able to:
Read ML research papers deeply and efficiently
Critically evaluate methods, assumptions, and experiments
Synthesize ideas across papers
Communicate insights professionally
Propose credible research directions inspired by prior work
The final outcome will be used by the instructor to assess your ability to think like a future PhD scholar
The project is designed to:
Strengthen your ability to read and interpret ML research papers
Develop critical thinking and experimental design skills
Prepare you to collaborate with a PhD advisor on research
Paper descriptions:
Conduct a deep, critical study of the five topics and submit five research papers. The goal is not just to write the papers, but to be able to justify:
1. Why the authors (you) made specific modeling choices
2. How each paper fits into the broader research landscape
3. Where the limitations and opportunities for extension lie
Here are four descriptions from which you will create five separate, ready-to-submit research papers (minimum 15-20 pages each, excluding references and intro)
GNN for Multi-Echelon Demand Forecasting
Abstract: Built a graph-neural-network prototype with message passing over a multi-echelon interaction graph and a variational last layer that outputs calibrated uncertainty for risk-aware forecasts. Benchmarked against a deterministic Long Short-Term Memory (LSTM) baseline and observed improved error and coverage characteristics; produced reliability diagrams and coverage-error curves.
Implemented message-passing with edge features and a Bayesian output layer; tracked Expected Calibration Error and nominal coverage under distribution shift.
Designed ablations on graph topology, edge connectivity, and loss weighting for risk-sensitive objectives.
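To make the description above concrete, here is a minimal NumPy sketch of one message-passing step with edge features and a standard Expected Calibration Error computation. The update rule, weight shapes, and function names are illustrative assumptions, not the paper's exact architecture (which would add the variational output layer on top).

```python
import numpy as np

def message_passing_step(h, edges, edge_feats, W_msg, W_upd):
    """One round of message passing with edge features (hypothetical
    minimal form; the actual update rule may differ).
    h:          (N, d) node states
    edges:      list of (src, dst) pairs
    edge_feats: (E, e) per-edge features
    """
    agg = np.zeros_like(h)
    for k, (src, dst) in enumerate(edges):
        # message = linear map of [sender state ; edge feature]
        msg = np.concatenate([h[src], edge_feats[k]]) @ W_msg
        agg[dst] += msg
    # node update: combine old state with aggregated messages
    return np.tanh(np.concatenate([h, agg], axis=1) @ W_upd)

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: bin by confidence, average |accuracy - confidence|
    weighted by the fraction of samples in each bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece
```

Tracking ECE under distribution shift then amounts to recomputing this quantity on shifted test splits and comparing it against the in-distribution value.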
Evidence-first Retrieval-Augmented Generation with Conflict Auditing, University of Washington
Abstract: Prototyped a detect-then-decode retrieval-augmented generation pipeline. Documents were first retrieved with BM25, dense passage retrieval, and FAISS; candidates were then re-ranked with a cross-encoder, contradictions were audited using natural language inference models and heuristic checks, and decoding produced span-level citations plus a selective abstention policy when evidence was weak or conflicting.
Logged provenance for every claim and localized contradictions across passages; in stress tests with deliberately colliding entities, this structured auditing reduced contradiction-tagged generations while keeping the fraction of answerable questions essentially unchanged.
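The audit-then-abstain gate described above can be sketched as follows. This is a simplified stand-in, assuming relevance scores already come from the retriever/re-ranker and that an NLI model is available behind the `nli_contradiction` callback; the `Passage` type, thresholds, and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    score: float     # retriever / re-ranker relevance
    source_id: str   # provenance for span-level citation

def audit_and_decide(passages, nli_contradiction,
                     min_score=0.5, max_contradiction=0.5):
    """Detect-then-decode gate (illustrative thresholds).
    nli_contradiction(a, b) -> probability that text a contradicts b.
    Returns (supported passages, abstain flag)."""
    support = [p for p in passages if p.score >= min_score]
    if not support:
        return [], True                  # weak evidence: abstain
    for i, a in enumerate(support):
        for b in support[i + 1:]:
            if nli_contradiction(a.text, b.text) > max_contradiction:
                return support, True     # conflicting evidence: abstain
    return support, False                # decode with citations
```

When the flag is False, the decoder answers and cites each supporting passage's `source_id`; when True, the system abstains, which is what keeps contradiction-tagged generations down without shrinking the answerable set.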
Calibration and Deferral for Long-Context QA, University of Washington
Abstract: Replicated temperature scaling and deep ensembles; implemented conformal selective prediction so the system either answers with a risk guarantee or defers.
Measured calibration drift as prompt length grew and as retrieval noise increased; produced risk-coverage curves and token-level, span-level, and response-level deferral/abstention policies.
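A minimal sketch of the two core pieces, temperature scaling and split-conformal selective prediction, is below. Function names and the nonconformity-score convention (lower is better) are assumptions; the real system would compute scores from model confidence on a held-out calibration set.

```python
import numpy as np

def temperature_scale(logits, T):
    """Softmax with temperature T; T > 1 softens overconfident logits."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def conformal_threshold(cal_scores, alpha=0.1):
    """Split-conformal quantile: with nonconformity scores from a held-out
    calibration set, answering whenever score <= qhat gives a marginal
    risk guarantee at level alpha."""
    n = len(cal_scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_scores, min(q, 1.0), method="higher")

def answer_or_defer(score, qhat):
    """Selective prediction: answer only when nonconformity is small."""
    return "answer" if score <= qhat else "defer"
```

Sweeping `alpha` and recording the resulting answer rate and error rate is exactly what produces the risk-coverage curves mentioned above.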
Explainable Supplier Risk Modelling for SME Inclusion, University of Washington
Abstract: Implemented SHAP-based explanation pipelines on generalized additive supplier risk models to produce human-readable rationales and enable fairness audits focused on SME inclusion.
For companies like Sound Credit Union, built LedgerLens, a fintech copilot for benefits and policy Q&A: evidence-first retrieval with rule-based reasoning, provenance trails, and conformal prediction to bound error at a target risk level; the system abstains under ambiguity or low coverage.
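For an additive model, per-feature Shapley attributions reduce to the shape-function contributions centered against a background dataset, which is what makes GAMs attractive for audits. A sketch under that assumption (function name and interface are illustrative, not a real SHAP API call):

```python
import numpy as np

def gam_attributions(shape_funcs, X, X_background):
    """Per-feature attributions for an additive model f(x) = sum_j f_j(x_j).
    For a GAM, these centered additive contributions coincide with Shapley
    values relative to the background data; illustrative sketch only."""
    contrib = np.column_stack(
        [f(X[:, j]) for j, f in enumerate(shape_funcs)])
    baseline = np.array(
        [f(X_background[:, j]).mean() for j, f in enumerate(shape_funcs)])
    # (n_samples, n_features): how much each feature moves the prediction
    # away from the background expectation
    return contrib - baseline
```

Because attributions plus the baseline reconstruct the model output exactly, each supplier's risk score decomposes into auditable per-feature rationales, which is what a fairness review over SME-related features would inspect.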
Reading & One-Page Overview (Per Paper)
Prepare a one-page summary for each paper that includes:
Citation
Full title, authors, venue, year, and link
Problem Statement
What problem is the paper trying to solve?
Why is it important (applications / context)?
Key Idea (One Paragraph)
Explain the core approach clearly, as if to a knowledgeable classmate
Main Contributions (Bullet Points)
2-4 concise contributions claimed by the paper
Key Figure or Table
Identify one figure/table that best captures the main result. Explain in 2-3 sentences why it matters
Deliverable 2A - Deep Dive: Methods & Technical Notes (Per Paper)
Create 2-3 pages of structured technical notes covering:
Model / Algorithm
What model or algorithm is proposed?
Key equations, loss functions, or architectural choices
Pseudocode for the main algorithm (if applicable)
Assumptions
What assumptions are made about data, distributions, or environment?
Are they realistic?
When might they fail?
Training & Evaluation
Datasets used
Metrics used and why
Baselines compared against
Implementation-Level Thinking
What would you need to implement this?
Libraries, architecture sketch, data preprocessing
Potential computational challenges
(You must implement the model. More than conceptual thinking, an execution-level demonstration is required for this project to assess your research readiness: it shows the work is executable and that the facts mentioned are real, not fabricated. Make sure to analyze and validate your results.)