Every Indian engineering college talks about employability. Every NAAC SSR has a criterion for it. Every NBA program self-assessment touches it. Every Principal’s annual address mentions it. And yet if you ask three Deans in the same institution what “employability outcomes” actually means as a measurement, you will get three different answers, and the most common one will be “placement percentage.”
Placement percentage is not employability. The gap between the two is where most of the work for the next decade of Indian engineering education sits.
This piece covers what employability is when it is taken seriously, why an LMS cannot track it, and what changes when employability is treated as a layer of Intelligent Learning Infrastructure rather than a separate spreadsheet maintained by the placement cell.
The four things colleges currently call employability
In most Indian engineering colleges, “employability outcomes” means one of four very different measurements, collapsed into a single number reported to NAAC, NBA, or AICTE.
Placement percentage. Number of students placed divided by number of students eligible. The most reported, the least informative. A college can have 95% placement and a graduating cohort that is fundamentally underprepared, because the metric counts the offer, not what the offer is for, and certainly not whether the student keeps the job.
Average and median package. Slightly better. Captures the quality of offers in aggregate. But heavily skewed by a handful of top offers, and tells you nothing about the bottom quartile of the cohort, which is where the institution’s actual employability work happens or fails to happen.
Higher studies and entrepreneurship. Counted in NAAC and NBA frameworks but rarely tracked with rigour. A student who took a year off and is now preparing for GATE shows up the same way in the dataset as a student who is actively running a registered company.
Sector and role distribution. Almost no college tracks this consistently. Which means the institution cannot answer the question “are our CSE graduates going into actual software engineering roles, or are 60% of them in BPO and support functions?” — even though the answer to that question matters more to NBA Tier-1 outcomes than any of the placement percentages.
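To make the difference between these four numbers concrete, here is a minimal sketch in Python. The records and field names are hypothetical, but the arithmetic is exactly what each metric reports — and it shows how differently the same cohort looks through each lens:

```python
from statistics import mean, median
from collections import Counter

# Hypothetical cohort records; the field names are illustrative, not a real schema.
cohort = [
    {"eligible": True, "placed": True,  "package_lpa": 45.0, "role": "software"},
    {"eligible": True, "placed": True,  "package_lpa": 4.5,  "role": "bpo"},
    {"eligible": True, "placed": True,  "package_lpa": 4.0,  "role": "bpo"},
    {"eligible": True, "placed": True,  "package_lpa": 5.0,  "role": "software"},
    {"eligible": True, "placed": False, "package_lpa": None, "role": None},
]

eligible = [s for s in cohort if s["eligible"]]
placed = [s for s in eligible if s["placed"]]

placement_pct = 100 * len(placed) / len(eligible)     # 80.0 — looks healthy
packages = [s["package_lpa"] for s in placed]
avg_pkg, med_pkg = mean(packages), median(packages)   # 14.625 vs 4.75 LPA
role_dist = Counter(s["role"] for s in placed)        # software: 2, bpo: 2
```

One outlier offer drags the average package to three times the median, and the role distribution shows half the placed students in roles outside the program’s field — none of which the placement percentage reveals.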
Reporting any one of these in isolation is like a doctor diagnosing a patient using only their height. The number is real. It just doesn’t answer the question.
What employability actually is
A working definition that holds up under NBA, NAAC, and NEP 2020 scrutiny:
Employability is the verifiable readiness of a graduating student to perform a role in the labour market that corresponds to the program they completed, sustained over the first two years post-graduation.
Five things are inside that sentence. Each one needs to be tracked separately:
- Verifiable readiness — assessment data that demonstrates the student has the technical, cognitive, and behavioural skills the role requires. Not the marks they got. The skills they actually have.
- Graduating student — measured at the point of leaving the institution, not measured retroactively after offers are received.
- Role in the labour market — what the student actually does for work, not what they were offered or what their package was.
- Corresponds to the program — a B.Tech CSE graduate working as a software developer is an employability outcome. The same graduate in a BPO role is a placement outcome but an employability gap.
- Sustained over two years — whether the student is still in the role, has moved up, or has dropped out of the workforce.
A college that is tracking only one or two of these is reporting placement data and calling it employability. That is the gap NBA assessors are now starting to probe in Tier-1 reviews, and it is the gap NEP 2020 implementation is going to force every institution to close.
Why this cannot live in an LMS
The five components above involve data that an LMS was never designed to hold:
- Verifiable readiness requires continuous assessment data across the four years, not just end-of-semester marks. The LMS holds marks.
- Skill mapping to roles requires CO-PO attainment data linked to industry skill frameworks. The LMS holds course outcomes if you have set them up. It does not hold the link to roles.
- Actual employment status requires a feedback loop from students after graduation. The LMS has no relationship with alumni.
- Two-year sustenance requires alumni tracking. The LMS does not track alumni; the institution’s ERP might, badly.
- Behavioural and cognitive readiness requires data from diagnostic tools that most LMS platforms do not include and cannot integrate cleanly.
This is why most placement cells run on a separate spreadsheet, why most alumni offices run on a third one, why most NBA self-assessment teams pull data from four different sources and reconcile it manually in the six months before the visit, and why the resulting numbers are honest approximations rather than verifiable measurements.
It is also why this work has to live somewhere other than the LMS.
Employability as a layer of Intelligent Learning Infrastructure
Intelligent Learning Infrastructure, or ILI, is the category of software that sits above the LMS and ERP, connecting the data that those systems hold separately into something the institution can actually use. Employability is one of the layers ILI is specifically built to enable, because the data required for honest employability tracking is exactly the data that gets stranded across LMS, ERP, placement spreadsheet, and alumni file.
In an ILI, employability is not a module bolted on to the LMS. It is a view of the same student data the institution is already collecting, organised around the question of readiness rather than the question of activity.
What that looks like in practice:
Year one, semester one. The 7AI diagnostic profile is built for every incoming student across aptitude, attention, learning speed, topic mastery, coding skills, personality, and career fitment. This is the baseline readiness measurement, taken at the start of the four years rather than reconstructed at the end.
Years one through four, every semester. CO-PO attainment data flows in automatically from the TEATAR teaching workflow. Course outcomes map to program outcomes, program outcomes map to graduate attributes, graduate attributes map to industry skill frameworks. The student’s readiness profile updates continuously as evidence accumulates. No retrospective reconstruction in the SAR.
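The roll-up described above is ordinary weighted arithmetic. A minimal sketch, using the 1–3 correlation weights common in NBA articulation matrices — the mapping and the attainment figures are illustrative, not taken from any real SAR:

```python
# Illustrative CO attainment figures for one course (fraction of target met).
co_attainment = {"CO1": 0.82, "CO2": 0.64, "CO3": 0.71}

# co_po_weights[PO][CO]: correlation strength (1-3) of each CO to that PO,
# as in a typical NBA-style articulation matrix. Values are hypothetical.
co_po_weights = {
    "PO1": {"CO1": 3, "CO2": 1},
    "PO2": {"CO2": 2, "CO3": 3},
}

def po_attainment(po: str) -> float:
    """Weighted average of the attainments of the COs mapped to this PO."""
    weights = co_po_weights[po]
    total = sum(weights.values())
    return sum(co_attainment[co] * w for co, w in weights.items()) / total

# PO1: (0.82*3 + 0.64*1) / 4 = 0.775
# PO2: (0.64*2 + 0.71*3) / 5 = 0.682
```

The same weighted-average step repeats at each level of the chain — PO to graduate attribute, graduate attribute to industry skill — which is why the computation can run continuously instead of being reconstructed by hand in the SAR-writing window.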
Year three, mid-point. The Placement Officer can pull a list of every student in the graduating cohort with their current readiness state mapped against the roles they have indicated interest in. A student with a CSE program outcome attainment of 78% but a software-development skill mapping of 41% appears as an actionable signal twelve months before placement season, not as a regret afterwards.
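The year-three signal is a simple filter over that data: a student can clear the program-outcome bar while falling short on the role-specific skill mapping. A sketch, with hypothetical thresholds and field names:

```python
# Thresholds are illustrative assumptions, not prescribed cut-offs.
PO_BAR, SKILL_BAR = 0.60, 0.60

def readiness_flags(students):
    """Students who look fine on PO attainment but fall short on the skill
    mapping for the role they have indicated interest in."""
    return [
        s["id"] for s in students
        if s["po_attainment"] >= PO_BAR and s["role_skill_score"] < SKILL_BAR
    ]

cohort = [
    {"id": "S001", "po_attainment": 0.78, "role_skill_score": 0.41},
    {"id": "S002", "po_attainment": 0.81, "role_skill_score": 0.72},
]
# readiness_flags(cohort) -> ["S001"]
```

S001 is exactly the student described above: 78% attainment, 41% skill mapping — invisible in marks data, visible the moment the two measurements sit in the same dataset.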
Year four, placement season. Offers received feed back into the same dataset. The institution can now answer not just “what is our placement percentage” but “what is the correlation between our 7AI career-fit index and the roles students actually accepted.” That is the data NBA and AICTE are going to start asking for. Most colleges cannot answer it. An institution running on ILI already has the answer.
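The correlation question is also answerable with plain arithmetic once the two columns live in one dataset. A sketch with hypothetical numbers — a Pearson correlation between a fit score and a 0/1 role-correspondence indicator (equivalently, the point-biserial correlation):

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical cohort: career-fit index per student, and whether the role
# they accepted corresponds to their program (1) or not (0).
fit = [0.9, 0.8, 0.7, 0.4, 0.3]
corresponds = [1, 1, 1, 0, 0]

r = pearson(fit, corresponds)  # close to 1.0 in this toy cohort
```

The computation is trivial; what most colleges lack is not the formula but the two columns sitting in the same place.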
Years five and six. Alumni feedback feeds into the same dataset through structured check-ins at the six-month, one-year, and two-year mark. The institution can now answer the sustained-employability question: of the students placed in roles corresponding to their program, what fraction are still in those roles or have advanced, and what fraction have left.
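The sustained-employability number is then a direct read of the check-in data. A sketch, with illustrative status codes ("advanced" meaning moved up within the field):

```python
# Hypothetical alumni check-ins at the 6-month, 1-year, and 2-year marks.
checkins = {
    "S001": {"6m": "in_role", "12m": "in_role",    "24m": "advanced"},
    "S002": {"6m": "in_role", "12m": "left_field", "24m": "left_field"},
    "S003": {"6m": "in_role", "12m": "in_role",    "24m": "in_role"},
}

# Fraction still in a corresponding role, or advanced, at the two-year mark.
sustained = sum(
    1 for s in checkins.values() if s["24m"] in ("in_role", "advanced")
)
sustained_fraction = sustained / len(checkins)  # 2/3
```

At the six-month mark all three students look identical; only the two-year view distinguishes a sustained outcome from a placement that did not hold.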
Across the full cycle, the data is in one place, the questions can be asked at any point in the four years rather than only in the SAR-writing window, and the answers are verifiable rather than approximated.
What this changes for the institution
Four things, in roughly the order they become visible:
The Placement Officer’s job changes. The job stops being a year-four scramble to get every student an offer of any kind, and starts being a four-year process of closing readiness gaps before placement season begins. The placement cell becomes proactive rather than reactive.
The NBA and NAAC documentation work shrinks. Employability data for the SAR, the SSR, and the AQAR builds itself as a by-product of normal teaching and placement activity. The six-month documentation sprint before an accreditation visit becomes a two-week review.
The conversation with industry partners deepens. The institution can answer questions from recruiters about specific cohort readiness with data rather than confidence, which changes the kind of conversations recruiters are willing to have and the kind of roles they are willing to bring on campus.
The Principal can answer a question that currently has no good answer. “Are our graduates employable, and how do we know?” stops being a question that produces a placement percentage and a hopeful tone. It becomes a question with a sourced, defensible, multi-dimensional answer.
What decision makers should ask in your institution this week
Three diagnostic questions worth taking to the next IQAC review or HOD meeting:
- If a recruiter calls today and asks “what fraction of your final-year CSE students are software-development-ready against the JD I have just shared,” can anyone in the institution answer in a way that is data-backed rather than estimated?
- When the next NAAC cycle asks for employability data beyond placement percentage, can the team assemble verifiable evidence for skill readiness, role correspondence, and alumni sustenance — or will it be a six-month reconstruction project?
- How many distinct systems, spreadsheets, and people does the current employability dataset live across, and what happens to that data when any one of them changes?
If the honest answers are “no,” “reconstruction project,” and “more than four” — the institution is not short of a placement strategy or an alumni outreach plan. It is short of the infrastructure that would let either of those work.
That infrastructure is ILI. Employability is one of the layers it makes possible.