
Shailendra K.

Principal Consultant - Machine Learning

About Me

I have 12+ years of experience working with enterprises on large-scale Data Science applications, providing Machine Learning expertise for both tactical and strategic solutions. My background includes Sr. Data Scientist roles at American Express and IBM, where I led global ML teams before starting my own boutique AI consulting venture.

In the recent past I have consulted for AmEx, GE Power, Bain & Co., Rio Tinto, AbbVie, Aegon Religare Life Insurance, Birla Sun Life Mutual Fund, Bajaj Finance, Axis Bank and ICICI Lombard GIC on several Data Science projects spanning Credit Risk, Marketing, Sales, Optimization and Voice of Customer functions. My role also involved creating multi-year roadmaps for leveraging Data Science. Along the way I have been granted several patents and published research papers on Machine Learning and Optimization.

Work Experience
Chief Data Scientist at Valiance Solutions Apr 2016 - present New York/New Delhi

I lead global teams of Data Scientists and Data Engineers across the US, EMEA and the Indian subcontinent, building a variety of decision systems. Some recently completed projects:

➢ Effectiveness of drugs on patients.

➢ Summarization of Technical Literature of Drugs into Marketing Messages.

➢ Spend Optimization for Pharma Sales.

➢ Automated Underwriting & Claim Processing for Auto Insurance based on vehicle images.

➢ Optimizing Drilling Parameters in large-scale Mining Operations using geospatial data (Diamond Mining).

➢ Augmented Actuarial Models (Auto Insurance).

➢ Application Fraud Scoring Model (Consumer Durable Loan Portfolio).

➢ Personal Loan Default Scorecard & Credit Line Assignment.

➢ Geo-Tagging based Risk Scoring (Consumer Durable Loan Portfolio).

➢ Lapse Prediction Scorecard (Life Insurance Portfolio).

➢ Cross-Sell Scorecard (Life Insurance Portfolio).

➢ Default Application Scoring Model (Auto Finance Portfolio).

➢ Predicting Serviceable Tweets among Indirect Twitter Mentions.

➢ Service Recovery Platform for Non-Survey Responders (US market).

➢ Attrition Scorecard for the Early Life Stage of the Customer (US Charge Card).

➢ Customer Lifetime Value Models for Advertising.

➢ Health Claim Fraud Prediction.

➢ Neural Network based Sales Forecasting.

➢ Sales Volume Forecasting & Inventory Optimization.

➢ Production Line Optimization (Textile).

➢ Customer Re-targeting Platform recommending Ads, Bid Price & Discounts.

➢ ATM & Branch Cash-Flow Forecasting & Optimization.

➢ Content Classification & Recommendation Engine.

➢ Intelligent Handwritten Character Recognition Platform.

➢ Himalayan Wildlife Species Identification from Camera-Trap Images.

Specialist - Machine Learning at IBM May 2014 - Sep 2016 Gurgaon, Haryana, India

I ran the Machine Learning COE for the group, creating ML models mostly for industrial clients, e.g. predicting driving behaviour, energy conservation, and elevator breakdowns.

Sr. Data Scientist at American Express Feb 2012 - Apr 2014 New York/New Delhi

I was the subject matter expert on Advanced Analytics for the Voice of Customer Strategy team, responsible for developing effective customer engagement strategies using advanced analytics. I was primarily responsible for developing strategic ideas, predictive modelling initiatives, analytic deliveries, and mentoring of junior team members. My responsibilities encompassed: 1) developing decision frameworks and predictive models for improving customer service and satisfaction; 2) planning and executing projects to improve customer satisfaction metrics (RTF, OSAT, FCR); 3) delivering analysis, profiling, segmentation, and actionable insights from Voice of Customer data to partners; 4) exploratory data analysis using all available data sources to achieve business objectives; 5) developing a predictive/exploratory analysis framework for social media to improve product recommendations, drive revenue, and raise customer satisfaction.

Bachelor of Engineering from Jadavpur University
Aug 2005 - May 2009
Mathematical Simulations, Advanced Calculus

Young Innovators Award, Patents & Research Papers

Ph.D. (yet to be awarded) from Indian Institute of Technology, Delhi
Jan 2016 - Dec 2020
Pattern Recognition, Numerical Optimization, Neural Networks, Probability Distributions, Digital Image Processing, Distributed Machine Learning

MA Economics from UOU
Aug 2015 - May 2017
Economics, Econometrics, Statistical Modelling, Economic Models


Credit Risk Scorecard Monitoring

Nowadays, retail banks are increasingly focused on discriminating between the right clients and the wrong ones (defaulters). From a credit risk perspective, a good client is a customer/applicant with a low chance of defaulting on his obligations (a low-risk client). The process of separating good and bad applicants is where the Credit Risk Scorecard comes into play. It is an automated application that helps banks consistently assess each client in a short period of time, i.e. to estimate his chance of delinquency (Probability of Default). Integrated with loan-approval IT applications, it helps banks speed up the loan application process, reducing man-hours and increasing productivity with complete transparency.
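As a hedged sketch of the scoring mechanics (the base score, base odds, and PDO values below are illustrative defaults, not from any real scorecard), a points-to-double-the-odds scaling converts a model's log-odds of a client being good into scorecard points:

```python
import math

def scorecard_points(log_odds, base_score=600, base_odds=50, pdo=20):
    """Convert model log-odds (good:bad) into scorecard points.

    base_score points correspond to base_odds:1 odds of being good;
    every pdo points doubles those odds.
    """
    factor = pdo / math.log(2)
    offset = base_score - factor * math.log(base_odds)
    return offset + factor * log_odds

# An applicant whose odds of being good are 50:1 scores the base 600:
print(round(scorecard_points(math.log(50))))   # 600
# Doubling the odds to 100:1 adds exactly one PDO (20 points):
print(round(scorecard_points(math.log(100))))  # 620
```

Because the factor is PDO/ln 2, every doubling of the good:bad odds adds exactly PDO points, which is what lets a bank read delinquency risk directly off the score.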

Introduction to Reinforcement Learning

Machine Learning can be broadly classified into 3 categories:
1. Supervised Learning
2. Unsupervised Learning
3. Reinforcement Learning
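To make the third category concrete, here is a minimal sketch of tabular Q-learning on an invented 5-state corridor environment (the environment, hyperparameters, and reward are illustrative, not from the article):

```python
import random

# Toy environment: states 0..4 on a line; the agent starts at 0 and is
# rewarded only on reaching state 4, which ends the episode.
N_STATES = 5
ACTIONS = (-1, 1)                       # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

random.seed(0)
for _ in range(200):                    # episodes
    s = 0
    while s != N_STATES - 1:
        if random.random() < epsilon:   # explore
            a = random.choice(ACTIONS)
        else:                           # exploit, breaking ties at random
            best = max(Q[(s, b)] for b in ACTIONS)
            a = random.choice([b for b in ACTIONS if Q[(s, b)] == best])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: bootstrap on the best action at the next state
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, moving right is valued higher than moving left at every
# non-terminal state, so the greedy policy walks straight to the goal.
print([Q[(s, 1)] > Q[(s, -1)] for s in range(N_STATES - 1)])
```

Unlike the supervised and unsupervised settings, no labels are given: the agent discovers the right behaviour purely from the delayed reward signal.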

Moving towards a world powered by Artificial Intelligence

Machine learning is a very successful technology, but applying it today often requires spending substantial effort hand-designing features. This is true for applications in vision, audio, and text: any machine learning algorithm performs only as well as the features it is given. Let's understand this with an image classification example. When we try to classify an image as "motorcycle" or "not motorcycle", the algorithm needs features from which it can draw information. Earlier, researchers spent decades hand-designing such features.
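For illustration, a hand-designed feature of the kind described above might be as simple as an edge-density summary computed from raw pixels (the function and the toy patches below are invented for this sketch):

```python
def edge_density(img, threshold=50):
    """Fraction of horizontally adjacent pixel pairs whose intensity
    difference exceeds threshold: a crude, hand-designed edge feature."""
    pairs = edges = 0
    for row in img:                       # img is a nested list of pixels
        for left, right in zip(row, row[1:]):
            pairs += 1
            if abs(left - right) > threshold:
                edges += 1
    return edges / pairs if pairs else 0.0

flat = [[10, 12, 11], [11, 10, 12]]       # low-contrast patch
striped = [[0, 255, 0], [255, 0, 255]]    # high-contrast patch
print(edge_density(flat), edge_density(striped))  # 0.0 1.0
```

A classifier would then consume summaries like this rather than raw pixels; the promise of deep learning is precisely that such features are learned instead of engineered by hand.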

Collecting Twitter Stream: Using Python & MongoDB

Text mining is the application of natural language processing techniques and analytical methods to text data in order to derive relevant information. Over the past few years, text mining has received a lot of attention due to the exponential increase in digital text data from web pages, Google's projects, and social media services such as Twitter. Twitter data constitutes a rich source that can be used to capture information about almost any topic imaginable. This data can be used in different use cases such as finding trends related to a specific keyword, measuring brand sentiment, and gathering feedback about new products and services.
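A hedged sketch of the shaping step in such a pipeline: flattening a raw tweet payload into the document to store. In a live collector the JSON would arrive from a tweepy stream listener and be persisted with pymongo's `collection.insert_one(doc)`; both libraries are assumed here, so only the shaping logic is shown, and the field names are illustrative:

```python
import json

def tweet_to_doc(raw_json):
    """Pick out the fields worth indexing from a raw tweet payload."""
    tweet = json.loads(raw_json)
    return {
        "tweet_id": tweet["id_str"],
        "text": tweet["text"],
        "user": tweet["user"]["screen_name"],
        "created_at": tweet["created_at"],
        "hashtags": [h["text"] for h in tweet["entities"]["hashtags"]],
    }

# A minimal fabricated payload standing in for one line of the stream:
sample = json.dumps({
    "id_str": "1", "text": "loving the new phone #launch",
    "created_at": "Mon Apr 01 00:00:00 +0000 2019",
    "user": {"screen_name": "alice"},
    "entities": {"hashtags": [{"text": "launch"}]},
})
doc = tweet_to_doc(sample)
print(doc["hashtags"])  # ['launch']
```

Storing the flattened document rather than the raw payload keeps the MongoDB collection queryable by user, hashtag, or time without per-query JSON parsing.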


Variable reduction is a crucial step for accelerating model building without losing the potential predictive power of the data. With the advent of Big Data and sophisticated data mining techniques, the number of variables encountered is often so large that variable selection or dimension reduction techniques become imperative to produce models with acceptable accuracy and generalization. The temptation to build a model using all available information (i.e., all variables) is hard to resist: ample time and money are exhausted gathering data and supporting information. Analytical limitations require us to think carefully about the variables we choose to model, rather than naively using all information to understand complexity. The purpose of this paper is to illustrate techniques for effectively managing the selection of explanatory variables, leading to a parsimonious model with the highest possible prediction accuracy. Note that the following techniques may or may not be applied in the given order, contingent on the data. The very first step, before applying any of them, is to run univariate analysis on all variables to obtain observation frequency counts and missing-value counts; variables with a large proportion of missing values can be dropped upfront from further analysis.
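The upfront missing-value screen described above can be sketched as follows (the cutoff, variable names, and data are illustrative):

```python
def drop_sparse_variables(table, max_missing=0.5):
    """Keep only variables whose missing-value proportion is at most
    max_missing. table maps variable name -> list of observations,
    with None marking a missing value."""
    kept = {}
    for name, values in table.items():
        missing_rate = sum(v is None for v in values) / len(values)
        if missing_rate <= max_missing:
            kept[name] = values
    return kept

data = {
    "income": [50, 60, None, 80],          # 25% missing -> kept
    "fax_no": [None, None, None, 12345],   # 75% missing -> dropped
}
print(sorted(drop_sparse_variables(data)))  # ['income']
```

Running this screen first shrinks the candidate set cheaply, so the more expensive selection techniques that follow operate on fewer variables.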

Machine Learning based Hybrid Method for Surface Defect Detection and Categorization in PU Foam

Foam making is an important industry, with foam mattresses being one of its main end products. To ensure quality production, their manufacturing is subject to very strict safety checks. Many types of defects can arise during the manufacturing process, such as holes, cuts, and misconfiguration of the material. Manual defect detection leads to inaccuracy and increases the chances of defects going unnoticed, which further reduces process efficiency and has an adverse effect on overall production. To counter this, this research paper proposes a hybrid approach that identifies defects present in and on the surface of PU (Polyurethane) foam material. Both supervised and unsupervised approaches are used to classify the PU foam into two categories, normal and defective, considering the type of defect. The more reliable model is then selected according to the precision rates of both models.
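The final selection step, picking the more reliable of the two models by precision, can be sketched like this (the labels and predictions are invented for illustration, with 1 marking a defective sample):

```python
def precision(y_true, y_pred):
    """Of the samples a model flags as defective (1), what fraction
    really are defective?"""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    predicted_pos = sum(p == 1 for p in y_pred)
    return tp / predicted_pos if predicted_pos else 0.0

y_true = [1, 1, 0, 0, 1, 0]            # held-out ground truth
supervised = [1, 1, 0, 1, 1, 0]        # 3 TP / 4 flagged -> 0.75
unsupervised = [1, 0, 0, 1, 1, 1]      # 2 TP / 4 flagged -> 0.50

models = {"supervised": supervised, "unsupervised": unsupervised}
best = max(models, key=lambda m: precision(y_true, models[m]))
print(best)  # supervised
```

Precision is a sensible criterion here because a false alarm sends a good foam sheet back for rework, so the cost of flagging non-defective material is what the selection step guards against.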
