How this EB-1A vs EB-2 NIW scorer works
EB-1A and EB-2 NIW are both self-petition-friendly pathways, but they reward different kinds of proof. EB-1A generally centers on sustained acclaim and evidence that maps to the regulatory criteria (for example: awards, judging, leading roles, memberships with selection criteria, media coverage, and original contributions). EB-2 NIW generally centers on national importance, a credible plan, and whether waiving the job offer/labor certification is justified.
This calculator turns common evidence signals into two comparable numbers: an EB-1A score and an EB-2 NIW score. The goal is not to “approve/deny” you. The goal is to help you see which evidence categories are currently doing the most work for each pathway and which categories are weak or missing.
What you enter (and how each input is interpreted)
Each field below is intentionally simple so you can run scenarios quickly. When a field is a count, enter the number of items you can document (not the number you hope to obtain). When a field is a 0–10 rating, use a consistent internal rubric and be conservative.
- Field of endeavor: a short label (e.g., “biomedical engineering”, “applied AI for energy”). It appears in the results summary and CSV.
- Publications: peer-reviewed publications you can list and evidence (include accepted/in press only if you can document acceptance).
- Citations: total citations across your work (use a single source consistently, such as Google Scholar; avoid mixing sources).
- Major awards: capped at 3 in the model to avoid one input dominating. Count awards that are plausibly national/international in scope.
- Judging events: instances of peer review, panel judging, or comparable evaluation of others’ work. The model scales this by 1/2 to reduce inflation from very frequent reviewing.
- Original contributions (0/1): enter 1 if you can document original contributions of major significance with practical impact; otherwise 0. (This is intentionally binary.)
- Leading/critical roles: count roles where you can show you were leading or critical to a distinguished organization or project.
- Media coverage: count meaningful third-party coverage (not self-published posts). The model normalizes by dividing by 3.
- Memberships with selection criteria: count memberships that require outstanding achievements/selection (not paid subscriptions). The model scales this by 1/2.
- Patents commercialized: count patents with evidence of commercialization or real-world adoption (not merely filed).
- Letters of support: count strong, independent letters (ideally from experts not closely tied to you). The NIW model normalizes by dividing by 5.
- U.S. national benefit plan clarity (0–10): how clear and feasible your plan is (milestones, methods, and why you are well-positioned).
- Policy alignment evidence (0–10): how well your work aligns with U.S. priorities (e.g., public health, energy, critical infrastructure), supported by citations to credible sources.
- Job offer reliance (0–10): 0 means you are not dependent on a job offer; 10 means your case depends heavily on a specific offer/employer. The NIW score is penalized as reliance increases.
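The caps and scaling factors described in the list above can be summarized in a short sketch. This is illustrative only: the function name and dict keys are my own, not part of the page's actual script.

```python
# Normalization the model applies to raw counts before scoring,
# per the field descriptions above (illustrative helper, not the page's code).
def normalize(awards, judging, media, memberships, letters):
    return {
        "awards": min(awards, 3),        # capped at 3 so one input can't dominate
        "judging": judging / 2,          # scaled by 1/2 to reduce inflation
        "media": media / 3,              # normalized by dividing by 3
        "memberships": memberships / 2,  # scaled by 1/2
        "letters": letters / 5,          # NIW model divides by 5
    }

# Example: 5 awards are capped to 3; 6 judging events count as 3.0;
# 3 media pieces count as 1.0; 7 letters count as 1.4.
print(normalize(5, 6, 3, 1, 7))
```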
Model formulas (transparent, simplified)
The calculator uses a compact “research signal” plus additional evidence weights. Research signal is designed to grow with citations but with diminishing returns:
Research signal:
R = ln(citations + 1) / 5 + publications / 20
Then it computes two pathway scores:
EB-1A score:
1.2×R + 4×(awards + judging/2 + leadership + memberships/2) + 6×contributions + 5×(media/3) + 3×patents
EB-2 NIW score:
1.1×R + 3×contributions + 2×patents + 0.8×(benefit/10) + 0.6×(policy/10) + 4×(letters/5) − 0.5×(offer/10)
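The three formulas above can be sketched in Python as follows. Function names and the code structure are illustrative; the page's actual script may organize this differently, but the arithmetic mirrors the published formulas.

```python
import math

def research_signal(citations: int, publications: int) -> float:
    # R = ln(citations + 1) / 5 + publications / 20
    return math.log(citations + 1) / 5 + publications / 20

def eb1a_score(R, awards, judging, leadership, memberships,
               contributions, media, patents):
    awards = min(awards, 3)  # awards are capped at 3 in the model
    return (1.2 * R
            + 4 * (awards + judging / 2 + leadership + memberships / 2)
            + 6 * contributions
            + 5 * (media / 3)
            + 3 * patents)

def niw_score(R, contributions, patents, benefit, policy, letters, offer):
    return (1.1 * R
            + 3 * contributions
            + 2 * patents
            + 0.8 * (benefit / 10)
            + 0.6 * (policy / 10)
            + 4 * (letters / 5)
            - 0.5 * (offer / 10))
```

Because the research signal uses a natural log, doubling your citations moves R far less than doubling your publications, which is the intended diminishing-returns behavior.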
These weights are not a legal standard; they are a practical way to compare scenarios consistently. If you change one input (for example, add two independent letters), you can see how much that moves NIW versus EB-1A.
How to use the calculator (recommended workflow)
- Start conservative: enter only evidence you can document today.
- Run a baseline: click Calculate scores and read both the score and the risk notes.
- Run two scenarios: (a) “best documented” and (b) “after improvements” (e.g., more letters, clearer national benefit plan, additional media coverage).
- Compare deltas: focus on changes that move the score meaningfully and are realistic for you to obtain.
- Export: use Download CSV to save the scenario you just evaluated.
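The "run two scenarios, compare deltas" step can be sketched as a small script. The `scores` helper and profile keys below are my own illustrative names; the helper mirrors the formulas in the "Model formulas" section, not the page's actual code.

```python
import math

def scores(p):
    """Return (EB-1A, NIW) for a profile dict, mirroring the model formulas."""
    R = math.log(p["citations"] + 1) / 5 + p["publications"] / 20
    eb1a = (1.2 * R
            + 4 * (min(p["awards"], 3) + p["judging"] / 2
                   + p["leadership"] + p["memberships"] / 2)
            + 6 * p["contributions"] + 5 * p["media"] / 3 + 3 * p["patents"])
    niw = (1.1 * R + 3 * p["contributions"] + 2 * p["patents"]
           + 0.8 * p["benefit"] / 10 + 0.6 * p["policy"] / 10
           + 4 * p["letters"] / 5 - 0.5 * p["offer"] / 10)
    return eb1a, niw

baseline = dict(citations=600, publications=20, awards=2, judging=6,
                leadership=2, memberships=1, contributions=1, media=3,
                patents=1, benefit=8, policy=9, letters=7, offer=2)
# Hypothetical "after improvements" scenario: two more letters, clearer plan.
improved = dict(baseline, letters=9, benefit=10)

b, i = scores(baseline), scores(improved)
print(f"EB-1A delta: {i[0] - b[0]:+.2f}, NIW delta: {i[1] - b[1]:+.2f}")
# → EB-1A delta: +0.00, NIW delta: +1.76
```

Note how letters and plan clarity move only the NIW score in this model, which is exactly the kind of asymmetry the delta comparison is meant to surface.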
Worked example (using the same logic as the calculator)
Suppose an applied AI scientist in climate modeling has: 20 publications, 600 citations, 2 major awards, 6 judging engagements, 2 leading roles, 1 selective membership, 3 media pieces, 1 commercialized patent, 7 strong independent letters, an 8/10 national benefit plan, 9/10 policy alignment, and low job-offer reliance (2/10). They also have documented original contributions with practical impact (1).
First compute research signal:
R = ln(600+1)/5 + 20/20 ≈ 1.28 + 1 = 2.28
(Your exact value may differ slightly depending on rounding.)
EB-1A score (showing the same structure as the script):
1.2×R + 4×(awards + judging/2 + leadership + memberships/2) + 6×contributions + 5×(media/3) + 3×patents
With memberships = 1 (so memberships/2 = 0.5), judging/2 = 6/2 = 3, and media/3 = 3/3 = 1:
≈ 1.2×2.28 + 4×(2 + 3 + 2 + 0.5) + 6×1 + 5×1 + 3×1
≈ 2.74 + 30 + 6 + 5 + 3 = 46.74.
NIW score:
1.1×R + 3×contributions + 2×patents + 0.8×(benefit/10) + 0.6×(policy/10) + 4×(letters/5) − 0.5×(offer/10).
With benefit/10 = 0.8, policy/10 = 0.9, letters/5 = 7/5 = 1.4, and offer/10 = 0.2:
≈ 1.1×2.28 + 3 + 2 + 0.8×0.8 + 0.6×0.9 + 4×1.4 − 0.5×0.2
≈ 2.51 + 3 + 2 + 0.64 + 0.54 + 5.6 − 0.1 = 14.19.
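The arithmetic above can be checked with a few lines of Python (a standalone recomputation of this example, with memberships = 1 as stated):

```python
import math

# Recompute the worked example: 600 citations, 20 publications, 2 awards,
# 6 judging, 2 leading roles, 1 membership, 3 media, 1 patent, 7 letters,
# benefit 8/10, policy 9/10, offer reliance 2/10, contributions = 1.
R = math.log(600 + 1) / 5 + 20 / 20
eb1a = 1.2 * R + 4 * (2 + 6 / 2 + 2 + 1 / 2) + 6 * 1 + 5 * (3 / 3) + 3 * 1
niw = (1.1 * R + 3 * 1 + 2 * 1 + 0.8 * (8 / 10) + 0.6 * (9 / 10)
       + 4 * (7 / 5) - 0.5 * (2 / 10))
print(round(eb1a, 2), round(niw, 2))  # → 46.74 14.19
```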
Interpretation: this profile is very strong on EB-1A-style acclaim signals (awards, judging, leadership, media) and also has a credible NIW narrative. If your NIW score is close to your EB-1A score, NIW may be attractive because it can be more flexible on the “acclaim” framing—especially when the national benefit story is clear.
Comparison table: what each pathway tends to reward
| Factor | EB-1A emphasis (in this model) | EB-2 NIW emphasis (in this model) |
|---|---|---|
| Publications & citations | Higher weight via 1.2×R | Moderate weight via 1.1×R |
| Major awards | High (4 points each; capped at 3 awards) | Indirect (not a direct input in NIW formula) |
| Judging / peer review | Supports acclaim (scaled by /2) | Not directly scored, but often supports “well-positioned” in practice |
| Leadership roles | Strong signal (4 points each) | Not directly scored, but can strengthen plan credibility |
| Media coverage | Directly scored (normalized by /3) | Not directly scored, but can support national importance narrative |
| Patents commercialized | Directly scored (3 points each) | Directly scored (2 points each) |
| Letters of support | Not directly scored in this model | Directly scored (normalized by /5) |
| National benefit plan & policy alignment | Not directly scored in this model | Directly scored (0–10 inputs normalized to 0–1) |
| Job offer reliance | Neutral in this model | Penalized as reliance increases |
Limitations and assumptions
This scorer is intentionally simplified. It helps you compare scenarios consistently, but it cannot capture the full legal and evidentiary analysis used in real petitions.
- Evidence quality is not measured: two “letters” can be very different in credibility, independence, and specificity; the model counts them equally.
- Service-center and trend effects are excluded: RFE patterns, adjudicator discretion, and evolving policy guidance are not modeled.
- Field differences are compressed: citation norms vary widely across disciplines; the same citation count can mean different things in different fields.
- Nonlinear realities: real cases can hinge on a single missing element (e.g., weak documentation) even if other signals are strong.
- Not a substitute for counsel: use this as a planning aid and discuss strategy with a qualified immigration attorney.
Privacy note: data entered here never leaves your browser; it is not stored or transmitted by this page.
Input your evidence profile
Results
Enter your details to see comparative scores.
EB-1A snapshot
Score: –
Risk notes: –
EB-2 NIW snapshot
Score: –
Risk notes: –
Practical next steps after you get your scores
Use the scores as a prioritization tool. If EB-1A is much higher, your profile may already be stronger on acclaim-style evidence; consider whether you can document the strongest criteria cleanly and whether your narrative supports “sustained acclaim.” If NIW is close or higher, focus on strengthening the national benefit story: a clear plan, independent letters, and credible policy alignment.
- If EB-1A is lagging: improve documentation for awards, judging, leading roles, and media; ensure each item is well-evidenced and clearly attributable to you.
- If NIW is lagging: tighten the national benefit plan, add independent letters, and cite objective policy sources showing why your work matters broadly.
- If both are low: treat this as a gap analysis. Identify the 2–3 evidence categories you can realistically strengthen in the next 3–6 months and rerun the calculator.
