Solving Stochastic Models in Operational
Research
S. P. Mukherjee
Calcutta University
35, B.C. Road, Calcutta 700 019
E-mail: csri@satyam.net.in
Abstract
Stochastic models used in various areas of Operational Research usually consider a deterministic objective function, even when some of the elements are random or some probabilistic constraints are involved. The decision variables are invariably treated as non-random. The distribution of a random objective function is rarely worked out; instead it is summarized by some linear or non-linear deterministic functional such as the expectation or the variance. In some problems, it may be important to derive the distribution and to minimize (maximize) some quantile or exceedance probability. Similarly, it may be worthwhile to treat the decision variable as random and to derive the optimal distribution of this variable as the solution to the problem, in the hope that this will yield a better result. Further, the distributions of the random variables involved in such models cannot always be taken as completely known, and we may need to estimate the solution to the model by using the corresponding estimates of the distributions. Some of these issues are discussed in the context of the classical Newsboy problem in Inventory Management.
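As a small illustration of the quantile point (an editorial sketch, not from the paper; the demand distribution and prices are assumed), the following contrasts the classical expected-profit newsboy quantity with one chosen to maximize a low profit quantile:

```python
# Editorial sketch, not from the paper: demand distribution and prices assumed.
import numpy as np
from scipy import stats

c, p = 4.0, 10.0                          # assumed unit cost and selling price
demand = stats.norm(loc=100, scale=20)    # assumed demand distribution

# Classical newsboy: order the critical fractile F^{-1}((p - c)/p).
q_expected = demand.ppf((p - c) / p)

# Quantile criterion: maximize the 10th percentile of profit by simulation.
rng = np.random.default_rng(1)
d = demand.rvs(size=100_000, random_state=rng)
grid = np.arange(60, 160)
q10 = [np.percentile(p * np.minimum(q, d) - c * q, 10) for q in grid]
q_quantile = grid[int(np.argmax(q10))]
print(f"expected-profit order: {q_expected:.1f}, 10%-quantile order: {q_quantile}")
```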
Operations Research –An Applied Science?
B. Majumdar
SQC & OR Unit
Indian Statistical Institute
7, S.J.S. Sansanwal Marg
New Delhi 110 016
Abstract
Operations Research (O.R.) was born out of real-life problems encountered in military operations during World War II. In its initial phase of development, various optimization problems under resource constraints, as encountered by industry, business and commerce, were addressed and suitable models were developed. This established O.R. as a predominantly problem-oriented applied science in the sphere of management decision making across a diversified spectrum. In the next phase of
its development, applied mathematicians, seeing the challenging
opportunities, entered the fray in large numbers and developed theories, not
necessarily having their roots in real life problems and mostly based on
hypothetical formulations under various assumptions. With this, the subject
assumed the status of an important
branch of applied mathematics. Industry and commerce could not utilize much of this knowledge due to the high level of mathematical content and the lack of computing facilities. Then came the era of superfast computers with large memories and user-friendly computer packages. Operations Research specialists started developing problem-specific algorithms and software for their solution. The subject started regaining its character of an applied science, though the spate of theoretical research also remained unabated, thereby strengthening the theoretical base of
the subject. With the emergence of the service, IT and IT-enabled sectors, the establishment of a large number of globally operative players with diverse interests, the emergence of a highly competitive market and speedy changes on the technological front, the demand for O.R. is bound to take a quantum jump. Operations Research specialists should seize these opportunities and help industry and commerce by formulating and solving their highly intricate business problems, while continuously striving for enrichment on the theoretical front. To maintain the problem-oriented character of O.R., one can no longer afford to keep oneself away from the business scenario.
Optimization
Based Decision Support System for Integrated Supply Chain
Planning for a
Process Industry
Goutam Dutta
Indian Institute of Management, Ahmedabad 380 015, India
E-mail: goutam@iimahd.ernet.in

Robert Fourer
Northwestern University, Evanston, IL 60208, USA
Abstract
We show how a generic multi-period optimization-based decision support system (DSS) can be used for strategic and operational planning in a company, with seven fundamental elements, namely Materials, Facilities, Activities, Times, Storage-Areas, Supply Sources and Demand Sinks. This DSS, which optimizes the company's activities in a multi-material, multi-facility, multi-activity system over a multi-period horizon, requires little or no managerial knowledge of optimization techniques. We discuss the issues of interface design, data reporting and updating. The model has been tested with the real data of two process industries: a steel company in the USA and a pharmaceuticals company in India. The results demonstrate a significant potential for profit improvement.
Serial
Multi-Echelon Inventory Systems
N. Selvaraju
Department of Mathematics
Indian Institute of
Technology Guwahati
Guwahati - 781 039, Assam,
INDIA
nselvaraju@iitg.ernet.in
Abstract
The control of
inventories of physical objects has been the theme of literally thousands of
technical and trade journal articles during the last 100 years or so. In many
inventory systems, items are stocked at more than one place. For example, in a
distribution network items are stocked both at a central location and at a
local warehouse. Similarly, in a production network, items are stocked at
various stages of production. Multi-echelon systems represent such networks of
activities that are often encountered in practice. Instead of treating each
activity in isolation, multi-echelon systems take a holistic view of all the
activities. In view of the importance of inventory management in supply chains,
which are often consist of a complex network of activities, and the
availability of high performance computers, more and more of such systems are
becoming tractable. A common approach to study such systems has two components.
The first component is to characterize the service
performance of
the multi-echelon system under a specification of stock levels. The second
component is to determine the optimal levels that minimize inventory costs
subject to service level constraints.
There has been
a surge of interest recently in supply-chain management, as many
companies have
realized substantial benefits by restructuring their supply chains. To do this,
one must understand how alternative control schemes and system designs work. In
this talk, we will focus on a simple and special structure, namely, a series
system where items/stages are linked mainly through supply-demand
relationships. The root problem here, as in a single-stage system, is the
combination of demand uncertainty and supply lead-times. We must decide not
just how much inventory to keep, but also where to position it. Assuming that a
base-stock policy is implemented to control the inventories, and with a variety
of model assumptions, we will try to get answers to certain questions that have
managerial implications.
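As a concrete illustration of the two-component approach, the sketch below simulates one candidate design: a two-stage series system under installation base-stock control with Poisson demand and deterministic lead times (all numbers illustrative, not from the talk). Its output, the fill rate, is the service measure one would constrain while searching over the stock levels (S1, S2):

```python
import numpy as np

def simulate_serial(S1, S2, L1=2, L2=3, mu=10.0, T=50_000, seed=0):
    """Estimate the stage-1 fill rate of a two-stage series system run under
    installation base-stock levels S1 (downstream) and S2 (upstream).
    Assumed dynamics: periodic review, Poisson demand, full backlogging;
    stage 2 orders from an uncapacitated supplier (lead time L2), stage 1
    orders from stage 2 (lead time L1) but shipments wait for stage-2 stock."""
    rng = np.random.default_rng(seed)
    on1, on2 = S1, S2                    # on-hand stock at each stage
    back1 = 0                            # customer backorders at stage 1
    b12 = 0                              # stage-1 orders waiting at stage 2
    pipe1, pipe2 = [0] * L1, [0] * L2    # shipments in transit
    met = total = 0
    for _ in range(T):
        on1 += pipe1.pop(0)              # receive arriving shipments
        on2 += pipe2.pop(0)
        d = rng.poisson(mu)
        total += d
        fill_old = min(back1, on1)       # clear old backorders first
        on1 -= fill_old; back1 -= fill_old
        fill_new = min(d, on1)           # then serve today's demand
        on1 -= fill_new; back1 += d - fill_new
        met += fill_new
        ship = min(b12 + d, on2)         # stage 1 reorders d; stage 2 ships
        on2 -= ship; b12 += d - ship     # what it can and backlogs the rest
        pipe1.append(ship)
        pipe2.append(d)                  # stage 2 reorders its own demand
    return met / total

print(simulate_serial(45, 60))   # e.g. search (S1, S2) for a 95% fill target
```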
An EOQ Model
with Fuzzy Lead-Time Over a Finite Time Horizon Under Inflation and Time Value
of Money
Jayanta Kumar Dey1, Samarjit Kar2, Manoranjan Maiti3
1Department of
Mathematics, Mahishadal Raj College, Mahishadal, East-Midnapore-721628, India.
2Department of
Mathematics, Haldia Institute of Technology, Haldia, East-Midnapore-721657,
India.
3Department of
Applied Mathematics with Oceanology and Computer Programming, Vidyasagar
University, Midnapore, 721102, India
Abstract
Real-world inventory control models are normally not free from lead-time and are developed for a finite duration of time. The lead-time is, moreover, often imprecise in nature, and the value of money changes with time. In this paper, we develop an inventory model with constant demand and imprecise lead-time under inflation and time discounting over a prescribed finite time horizon. Shortages are allowed and fully backlogged. The ordering cost is linearly order-level dependent. The imprecise lead-time is assumed to be represented by different types of membership functions, linear or quadratic, depending upon the prevailing supply condition of the item. The imprecise parameter is first transformed into a corresponding interval number and then, following interval mathematics, the objective function is transformed into an equivalent multi-objective deterministic inventory problem defined by the right limit, left limit and center of the interval. Finally, the multi-objective problem has been solved by an interactive decision-making procedure. The model has been
illustrated numerically and the Pareto-optimum results are presented in tabular
forms.
On some matrices associated with the transportation
problem
R. B. Bapat
Indian
Statistical Institute
New Delhi, 110016
e-mail:
rbb@isid.ac.in
Abstract
We consider some matrices associated with the cells corresponding to a basic feasible solution
in a transportation tableau. The matrices include the l1 distance matrix D of the cells and
a certain network matrix afforded by the tree corresponding to the cells. We
review some recent results concerning these matrices. For example, the determinant of D depends only on the number of origins and destinations. An explicit formula for D^{-1} can also be given.
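A quick numerical check of the determinant claim (illustrative only; the northwest-corner rule is used to generate the cells of a basic feasible solution for random balanced data):

```python
import numpy as np

def nw_corner_cells(supply, demand):
    # Cells of the basic feasible solution given by the northwest-corner rule.
    s, d = list(supply), list(demand)
    i = j = 0
    cells = []
    while i < len(s) - 1 or j < len(d) - 1:
        cells.append((i, j))
        if s[i] < d[j]:
            d[j] -= s[i]; i += 1
        else:
            s[i] -= d[j]; j += 1
    cells.append((len(s) - 1, len(d) - 1))
    return cells                          # m + n - 1 cells of a basis tree

def l1_distance_matrix(cells):
    # D[a, b] = |i_a - i_b| + |j_a - j_b| between basic cells a and b.
    P = np.array(cells, dtype=float)
    return np.abs(P[:, None, :] - P[None, :, :]).sum(axis=2)

rng = np.random.default_rng(0)
m, n = 3, 4
for _ in range(3):                        # different bases, same determinant
    supply = rng.uniform(1, 10, m)
    demand = rng.uniform(1, 10, n)
    demand *= supply.sum() / demand.sum() # balance the problem
    D = l1_distance_matrix(nw_corner_cells(supply, demand))
    print(round(np.linalg.det(D), 6))
```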
A search algorithm along an
incomplete binary tree
S. Dutta
Stat-Math Unit, Indian Statistical Institute
203 B. T. Road, Calcutta 700 108
E-mail: sudipta_r@isical.ac.in

Arindam Sengupta
Department of Mathematics, Indian Institute of Technology
Guwahati 781039, Assam
E-mail: arindam@iitg.ernet.in
Abstract
Bipartite Matching Problem in Manufacturing
Amit K. Biswas, D. K. Manna
Indian Statistical
Institute
110, Nelson Manickam Road
Aminjikarai
Chennai - 600 029.
Abstract
The bipartite matching problem is a well-studied area from both graph-theoretic and game-theoretic points of view. Such problems typically arise in industry whenever two components are to be assembled to produce a toleranced quality characteristic, which is a function (most often linear) of the two individual quality characteristics. Particularly in a scenario where the individual components are so expensive, either from the material or the value-addition angle, that the cost of a variable inspection becomes negligible, the manufacturer would not like to scrap even a single component if it can be matched to produce an acceptable assembly. In this paper we explore several different models proposed in the literature, such as the Gale-Shapley stable matching algorithm, the assignment problem model and the simple maximal matching algorithm.
Keywords: Stable Match, Maximal Match, Assignment Problem.
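For illustration, the assignment-problem model can be sketched as follows (hypothetical data; the linear assembly characteristic and tolerance are assumed):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical data: measured characteristics of two component lots.
rng = np.random.default_rng(0)
x = rng.normal(10.00, 0.05, 8)        # component A measurements
y = rng.normal(5.00, 0.05, 8)         # component B measurements
target, tol = 15.00, 0.08             # assembly target and tolerance

# Cost of pairing i with j: deviation of the (assumed linear) assembly
# characteristic x_i + y_j from target; out-of-tolerance pairs are forbidden
# via a large penalty.
dev = np.abs(x[:, None] + y[None, :] - target)
cost = np.where(dev <= tol, dev, 1e6)

rows, cols = linear_sum_assignment(cost)
matched = [(i, j) for i, j in zip(rows, cols) if cost[i, j] < 1e6]
print(f"{len(matched)} acceptable assemblies out of {len(x)} component pairs")
```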
Stable Outcomes For Contract Choice Problems
Somdeb Lahiri
School of Economic and Business Sciences,
University of Witwatersrand at Johannesburg,
Private Bag 3, WITS 2050,
South Africa.
E-mail: lahiris@sebs.wits.ac.za or lahiri@webmail.co.za
Abstract
In this paper, we consider the problem of choosing a set of multi-party contracts, where each coalition of agents has a non-empty finite set of feasible contracts to choose from. We call such problems contract choice problems. We provide conditions under which a contract choice problem has a non-empty set of "stable" outcomes. There are two types of stability concepts we study in this paper: cooperative stability and non-cooperative stability. The cooperative stability concept that we invoke here is the core. We also show that a simple generalization of the Deferred Acceptance Procedure with men proposing, due to Gale and Shapley (1962), yields outcomes for a generalized marriage problem which necessarily belong to the core. The non-cooperative stability concept that we study here is individual stability. The final result of this paper states that every contract choice problem has a non-empty weak bargaining set.
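For reference, the men-proposing Deferred Acceptance Procedure of Gale and Shapley (1962), which the paper generalizes, runs as follows in its classical form (a textbook sketch, not the paper's generalization):

```python
def deferred_acceptance(men_prefs, women_prefs):
    """Men-proposing deferred acceptance (Gale and Shapley, 1962).
    Each agent lists the other side in decreasing preference.
    Returns a stable matching as {man: woman}."""
    rank = {w: {m: r for r, m in enumerate(p)} for w, p in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}      # next woman to propose to
    engaged = {}                                 # woman -> man
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:                     # w accepts her first proposal
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:   # w trades up; old man is free
            free.append(engaged[w])
            engaged[w] = m
        else:                                    # w rejects m
            free.append(m)
    return {m: w for w, m in engaged.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(deferred_acceptance(men, women))           # {'m2': 'w1', 'm1': 'w2'}
```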
The Implications of the Law of Diminishing Returns for the
Average Shadow Price
Saral Mukherjee1 and A. K. Chatterjee2
1 Indian Institute of Management Ahmedabad, Vastrapur, Ahmedabad 380015, India
2 Indian Institute of Management Calcutta, Diamond Harbour Road, Calcutta 700104
E-mail: saralm@iimahd.ernet.in
Abstract
The economic significance of the average shadow price for integer and mixed integer programming problems has been established by researchers (Kim and Cho 1988; Crema 1995). In this paper
we investigate the interrelations between the Law of Diminishing Returns and
the average shadow price concept. We introduce the concept of the marginal unit
shadow price to deal with the integer programs where the right hand side
resource availability can only be varied in discrete steps. We show that for
integer programs, a sufficient
condition for the marginal unit shadow price to equal the average shadow price
is that the Law of Diminishing Returns should hold. The polyhedral structures
that will guarantee this equivalence have been explored. Identification of the
problem classes for which the equivalence holds greatly simplifies the existing procedure for determining the shadow price for such integer programs.
Keywords: Integer
Programming, Average Shadow Price, Law of Diminishing
Returns
Bertrand-Edgeworth
Equilibrium With a Large Number of Firms
Prabal Roy Chowdhury
Indian Statistical Institute, Delhi Centre
7, S.J.S. Sansanwal Marg, New Delhi 110016, INDIA
Abstract
We examine a model of price
competition where the firms
simultaneously decide on both price and quantity, and firms are free to supply
less than the quantity demanded.
We demonstrate that if the number of firms is large then, for a large class of residual demand functions (which includes the parallel residual demand function), a unique equilibrium in pure strategies exists. We find that most of the results go through if the firms are asymmetric, or produce to order. Moreover, we show that the `folk theorem' of perfect competition holds.
JEL Classification Numbers: D43, D41, L13.
Key words: Bertrand
equilibrium, pure strategy, non-manipulable residual demand.
Integrating the Theory of Constraints and Activity-Based Costing in a light engineering company
1 Engineer - Vendor Development, M/s Orient Fans
2 Mechanical Engineering Department, Jadavpur University, Kolkata
3 Production Engineering Department, Jadavpur University, Kolkata
E-mail: mse.fbd@orientfansindia.com
Abstract
In general, Activity-Based Costing (ABC) and the Theory of Constraints (TOC) represent alternative paradigms. In this paper a serious attempt has been made to integrate the basic concepts of ABC and TOC into a cohesive model. The paper explains the concept of process mapping. Integrating the basic concepts of ABC with the principles of TOC provides an expanded framework for understanding the economic consequences of production-related decision making. The model provides the valuable financial information necessary for informed operational and strategic decisions about process improvement and resource acquisition, as well as consumption, in an organisation. Finally, the model helps an organisation decide the optimal product mix needed to sustain itself in a competitive environment, considering a number of factors, e.g. quality improvement, cycle time management and technological investment decisions with justification.
Keywords: Activity-based costing (ABC), Theory of
constraints (TOC), Process Mapping.
Indian
Statistical Institute,
SQC
& OR Unit, Street no. 8, Habshiguda, Hyderabad 500 007
Abstract
With the help of Information
Technology and the computing power available today, Optimization Techniques
provide very efficient and pragmatic solutions to many decision making and
management problems. Agricultural irrigation uses considerable volumes of water
in our country, and the efficient management of water resources is of utmost importance. There is a need to take full advantage of today's information technology and computing power to establish good planning and management systems for the optimum utilization of the water resources in our country. This project concerns the development of a decision support system for the management and optimum utilization of water resources in irrigation projects. It addresses the problem
of providing a water distribution and scheduling plan for given requirements of
agricultural needs and the availability of water at various points in the irrigation system over a time horizon. As part of this project, the Netlab software package has been developed, which serves as a decision support system for irrigation projects. Netlab has two main components, the first of which is a network solver for solving general minimum-cost network flow problems. The second component is the interface software between the network solver and the users in the irrigation department. We wish to broaden the scope of application of Netlab to many other areas of
optimization.
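A toy of the kind of problem the network solver handles (an illustrative sketch using the networkx library; node names and numbers are made up, not from Netlab):

```python
import networkx as nx

# Toy irrigation network: a reservoir supplies two demand zones through a
# canal junction; weights are unit conveyance costs, capacities are canal
# capacities. All names and numbers are illustrative.
G = nx.DiGraph()
G.add_node("reservoir", demand=-80)      # source supplies 80 units
G.add_node("zone_a", demand=50)
G.add_node("zone_b", demand=30)
G.add_edge("reservoir", "junction", capacity=100, weight=1)
G.add_edge("junction", "zone_a", capacity=60, weight=2)
G.add_edge("junction", "zone_b", capacity=40, weight=3)

flow = nx.min_cost_flow(G)               # general min-cost network flow
print(flow)
print("total cost:", nx.cost_of_flow(G, flow))
```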
A. P. Burnwal1, Bikas Prasad2, D. Patel3
1.
Department of Mathematics, B.I.T., Sindri, Dhanbad, Jharkhand –
828123
E-Mail: apburnwal@yahoo.com
2.
Department of Mechanical Engg., N.I.T., Jamshedpur, Jharkhand –
831014
E-Mail: bikasprasad@yahoo.com
3. Department of Production Engg. & Mgmt., N.I.T., Jamshedpur, Jharkhand – 831014
E-Mail:
dharmendrapatel@yahoo.com
Abstract
Productivity deals with the effective utilization of resources in producing goods and/or services. There are several productivity improvement tools that can be implemented in an industrial organization. The priority of implementation of these tools depends on several factors such as material cost, labour cost, investment, implementation time and payback period. All these factors may be imprecise or vague. Due to the imprecise nature of these factors, fuzzy set theory can be applied to solve the productivity improvement problem.
This paper presents a weighted maximin model of productivity improvement, articulating suitable weights to be assigned to the membership functions to reflect the relative importance of the corresponding fuzzy objectives. A sample problem is solved using the proposed model.
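A minimal sketch of a weighted maximin formulation (illustrative; the two linear objectives, membership ranges and weights are assumed, not the paper's sample problem):

```python
from scipy.optimize import linprog

# Two fuzzy objectives f1 = 3x1 + x2 and f2 = x1 + 2x2 with linear membership
# functions mu_j = (f_j - L_j)/(U_j - L_j) on [0, 30] and [0, 20], weights
# w = (0.6, 0.4), and one resource constraint x1 + x2 <= 10.
# Weighted maximin: maximize t subject to t <= w_j * mu_j(x).
c = [0, 0, -1]                                  # maximize t
A_ub = [
    [-0.6 * 3 / 30, -0.6 * 1 / 30, 1],          # t <= w1 * mu1(x)
    [-0.4 * 1 / 20, -0.4 * 2 / 20, 1],          # t <= w2 * mu2(x)
    [1, 1, 0],                                  # resource constraint
]
b_ub = [0, 0, 10]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
x1, x2, t = res.x
print(f"x = ({x1:.2f}, {x2:.2f}), guaranteed weighted membership t = {t:.3f}")
```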
Application of quantitative techniques for optimum production planning – the case of a coking coal mining organization linked to a steel plant in India
Bholanath Sarkar
The Tata Iron and Steel Co. Ltd.
Box No. G-43, Jamshedpur, INDIA, Pin 831001
Fax: 0657-2423319
E-mail: b.sarkar@tatasteel.com, sarkarbns@rediffmail.com

Kalyan Kumar Guin
Vinod Gupta School of Management,
Indian Institute of Technology, Kharagpur, INDIA, Pin 721302
Abstract
This paper describes the methodology of formulating an optimum production plan using quantitative techniques, namely Linear Goal Programming (GP) and Linear Programming (LP), for a captive underground coal mining organization in India involved in producing coal from underground mines, washing it in coal washeries for quality improvement, and dispatching it to the steel plant. Considering the priorities of management, the GP model generates the optimum production plan for the overall organization, with values of underachievement and overachievement with respect to each goal. The output of the GP model is the input to the LP model, which in turn generates a single set of optimum production values at the section level for the individual collieries, considering the quality of coal as the major constraint.
Key Words: Goal Programming, Linear Programming, Target Cost, Coking Coal.
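The flavour of the GP stage can be sketched with a toy weighted goal programme (illustrative data only, not the organization's model):

```python
from scipy.optimize import linprog

# Toy weighted goal programme: two collieries with capacities 70 and 80
# (illustrative numbers), a production goal of 100 with deviation variables
# d- (under) and d+ (over), and a quality constraint that a blend of 20%-ash
# and 30%-ash coal average at most 25% ash.
# Variables: [x1, x2, d_minus, d_plus]
c = [0, 0, 2, 1]                            # penalize underachievement more
A_eq = [[1, 1, 1, -1]]                      # x1 + x2 + d- - d+ = 100
b_eq = [100]
A_ub = [[0.20 - 0.25, 0.30 - 0.25, 0, 0]]   # blended ash <= 25%
b_ub = [0]
bounds = [(0, 70), (0, 80), (0, None), (0, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x1, x2, dm, dp = res.x
print(f"x1={x1:.1f}, x2={x2:.1f}, under={dm:.1f}, over={dp:.1f}")
```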
A Travelling Salesman Problem with Multiple Job Facilities
S. Das
Department of Statistics, Dibrugarh University
E-mail: shila_d@lycos.com

N. Ahmed
Department of Statistics, DHSK College, Dibrugarh
Abstract
In this paper we consider a variation of the usual travelling salesman problem that introduces a more realistic situation. The problem can be described as follows: there are N stations to be visited and M distinct jobs to be performed by a salesman. The distance between each pair of stations and the facilities for jobs at each station are known. The salesman starts from a station (home) and returns to it after completing all the jobs. The salesman need not visit all the stations and should not visit a station more than once. The problem is to find a tour of the salesman such that the total distance travelled is minimum while all M jobs are completed. We use a lexicographic search approach to find the optimal solution.
Key words: Travelling salesman problem, Lexicographic search, Dynamic programming, Lower bound.
Analysis of Various Solution Approaches to the Travelling Salesman Problem
Department of Industrial Engineering
College of Engineering, Anna University
Abstract
Numerous exact and heuristic approaches have been suggested for solving the Travelling Salesman Problem. The time requirement and the quality of the solution obtained within a given time 't' are important factors in selecting an appropriate algorithm or heuristic. The time complexity increases with the number of cities 'n'. For exact algorithms, higher values of 'n' lead to a drastic increase in the number of computations and hence in the total computation time (the travelling salesman problem being NP-hard), making them infeasible for large problems.
This paper looks at various heuristics for solving the Travelling Salesman Problem. Comparisons are made on the basis of both time requirements and the quality of solutions obtained, and we also look at improving these heuristics by combining them with other heuristics. The improvements considered either give priority to time requirements over solution quality, seek a higher-quality solution at the cost of marginally increased computation time, or improve both aspects.
Key Words: Travelling Salesman Problem, Heuristic, Exact Algorithm
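A minimal sketch of the kind of heuristic combination examined, a nearest-neighbour construction improved by 2-opt (illustrative; random cities, not the paper's test data):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, (30, 2))                  # 30 random cities
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)

def tour_length(tour):
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def nearest_neighbour(start=0):
    # Constructive heuristic: always visit the closest unvisited city.
    unvisited, tour = set(range(len(pts))) - {start}, [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist[tour[-1], j])
        tour.append(nxt); unvisited.remove(nxt)
    return tour

def two_opt(tour):
    # Improvement heuristic: reverse segments while that shortens the tour.
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(1, len(tour)), 2):
            new = tour[:i] + tour[i:j][::-1] + tour[j:]
            if tour_length(new) < tour_length(tour) - 1e-9:
                tour, improved = new, True
    return tour

t0 = nearest_neighbour()
t1 = two_opt(t0)
print(f"NN: {tour_length(t0):.1f}  NN+2-opt: {tour_length(t1):.1f}")
```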
Solution to the Transportation Problem having Degeneracy: A New Approach
S.S. Kulkarni
and H. G. Datar
Willingdon College, Sangli
Abstract
A new method requiring fewer iterations for solving the transportation problem with degeneracy is proposed. It consists of using a new method for the initial basic feasible solution (IBFS). Pal and Basu's (5) general method is used; it helps reduce the number of iterations. A numerical example illustrating the procedure is also given.
Optimization of the Execution Time of a Distributed Processing System through Task Allocation
Pradeep Kumar Yadav
Central Building Research Institute
Roorkee 247667, Uttaranchal, INDIA
E-mail: prd_yadav@rediffmail.com

Avanish Kumar and Anju Rani Gupta
Abstract
Our day-to-day experience reveals that processor distance influences the overall task completion cost in any communication system, for instance in a telecommunication network: it is a well-known fact that a person has to pay more to talk to a person sitting at a distant place. A Distributed Processing System (DPS) is similar to a telecommunication network in this respect. In a DPS, a task is allocated to a processor in such a way that extensive inter-task communication is avoided and the capabilities of the processor suit the execution requirements of the task. The model discussed in this paper provides an optimal solution for assigning a set of "m" tasks of a program to a set of "n" processors, where m > n, in a Distributed Processing System, with the goal of maximizing the overall throughput of the system while balancing the load allocated to the processors.
The Execution Cost (EC) and Inter Task Communication Cost (ITCC) of the tasks are represented by arrays in the form of an Execution Cost Matrix ECM(,) and an Inter Task Communication Cost Matrix ITCCM(,) respectively, whereas the inter-processor distance is represented by an Inter Processor Distance Matrix IPDM(,). The present allocation policy involves stepwise modification of ECM(,) and ITCCM(,) by forming clusters of tasks. Some tasks may not be involved in any cluster; those tasks are treated as independent tasks. A mathematical programming approach has been used to determine the optimal allocation of tasks. Several sets of input data were considered to test the complexity and efficiency of the algorithm. It is found that the algorithm is suitable for an arbitrary number of processors with random program structures and workable in all the cases.
Keywords:
Optimization, Distributed Processing System, Inter Task Communication
Time, Execution Time, Inter Processor Distance, Task Allocation
Electrical
Networks and Mathematical Programming
H. Narayanan
Indian Institute of Technology
Bombay
Powai, Mumbai 400076
Abstract
The theme of this talk is that (a) problems that arise in the course of understanding electrical network behaviour give rise to good combinatorial optimization problems; and (b) an important branch of mathematical programming, namely the `linear complementarity problem', can be posed as an electrical network solution problem and solved as such.
Under part (a) we will focus attention only on the very
important `hybrid rank problem' [Given a graph G, find a partition of the edges
of G into S and T such that the sum of the rank of a subgraph on S and the
nullity (number of edges in the cotree) of the contraction on T (i.e., of the
graph on T obtained by fusing the endpoints of edges in S and then deleting
them) is a minimum]. This problem arises when we try to minimize the number of
variables in mixed (partly nodal and partly loop) variable analysis of
electrical networks. Focusing on this problem leads us to
i) the matroid intersection
problem
ii) the polymatroid membership
problem
iii) the notion of the Dilworth
Truncation of a submodular function,
and finally
vector space.
There is, thus, strong evidence
that attempting to understand electrical
network behaviour is a good way
to generate good combinatorics.
Joydeep
Dutta
Department of Mathematics
Indian Institute of Technology, Kanpur 208016
E-mail: jdutta@iitk.ac.in
Abstract
During the early 1960s there was a growing realization that a large number of optimization problems appearing in applications involved the minimization of non-differentiable functions. One of the important areas where such problems appeared was optimal control. The subject of nonsmooth analysis arose out of the need to develop a theory to deal with the minimization of nonsmooth functions. This development also had a tremendous impact on the theory and methods of optimization.
In this article we trace the
dramatic development of nonsmooth optimization and its applications to
optimization in finite dimensions. Beginning with the fundamentals of convex optimization, we quickly move over to the path-breaking work of Clarke, which extends the domain of nonsmooth analysis from convex to locally Lipschitz functions.
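For reference, the two central objects of this part of the survey are, for a locally Lipschitz f: R^n → R (standard definitions, reproduced here for the reader's convenience, not quoted from the talk):

f°(x; d) = limsup_{y → x, t ↓ 0} [f(y + t·d) − f(y)] / t,
∂f(x) = { ξ ∈ R^n : <ξ, d> ≤ f°(x; d) for all d ∈ R^n }.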
We discuss the notions of the Clarke directional derivative and the Clarke generalized gradient, along with the relevant calculus rules and applications to optimization. While discussing both convex optimization and locally Lipschitz optimization we also try to blend in the computational
locally Lipschitz optimization we also try to blend in the computational
aspects of the theory wherever possible. This is
followed by a discussion of
nonsmooth geometry of sets with nonsmooth boundaries. This includes the
definitions of various tangent and normal cones and their applications to
nonsmooth optimality conditions. All these developments, which extend results from the convex domain to the non-convex domain, rely heavily on the tools of convex analysis. The decisive move away from convexity was achieved by Mordukhovich, and following the study of tangent and normal cones we describe the nonsmooth sequential analysis due to Mordukhovich. We study the limiting normal cone and the limiting subdifferential and their applications in nonsmooth optimization, and also highlight the extremal principle of Mordukhovich and its importance. We then move on to a parallel development in nonsmooth optimization due to Demyanov and Rubinov, called quasidifferentiable optimization. They study the class of directionally differentiable functions whose directional derivative can be represented as a difference of two sublinear functions. The usual directional derivative and the Clarke directional derivative are, on the other hand, sublinear functions of the direction, and it had thus been thought that the most useful generalizations of the directional derivative were those sublinear in the direction; Demyanov and Rubinov thereby made a major conceptual change in nonsmooth optimization. In this section we define the notion of a quasidifferential, which is a pair of convex compact sets. We study some calculus rules and their applications to optimality conditions. We also study the interesting notion of the Demyanov difference between two sets and its applications to optimization.
E-mail: bector@ms.umanitoba.ca
Department of Business Administration, The University of Manitoba,
Winnipeg, Manitoba, R3T 5V4, Canada

And

S. Chandra1 and Vidyottama Vijay2
E-mail: 1 chandras@maths.iitd.ernet.in and 2 mar99005@ccsun50.iitd.ernet.in
Department of Mathematics, Indian Institute of Technology,
Hauz Khas, New Delhi – 110 016, India

Abstract
A bi-matrix game with fuzzy goals is shown to be equivalent to a (crisp) non-linear programming problem in which the objective function, as well as all but two constraint functions, are linear. This equivalence is further extended to bi-matrix games with fuzzy goals and fuzzy pay-offs by employing a suitable ranking (defuzzification) function. Further, certain other studies reported in the literature are also considered, so as to clearly bring out their similarities to and differences from the present model.
Keywords: Fuzzy goal, fuzzy pay-offs, fuzzy non-linear programming problem, fuzzy number.
SCM
Practices in India: A study
Sadhana Ghosh and Rahul Altekar
NITIE, Mumbai
Abstract
Indian manufacturing industries are gearing up to meet the challenges of the new millennium after a decade of reforms. In most organizations, various functions like purchasing, warehousing, production, transportation etc. are seen as integrated processes. The chained activities of these processes are of immense importance to the overall Supply Chain Management (SCM) strategy. The literature on SCM strategy favours demand-driven supply of goods in an environment of "high trust and open relations". This demands cooperation and mutual commitment between suppliers and customers. With highly reliable suppliers, organisations are able not only to bypass the slow and costly efforts of building their own capabilities but also to access new opportunities. The SCM and Lean Six Sigma approaches suggest building long-term supplier partnerships. However, from the literature we find that in India the emphasis has been on selection and performance evaluation of suppliers. Hence a need was felt to study the contemporary status of supply chain strategy in Indian organizations through a survey.
In this survey-based study an effort has been made to understand the current trend in supplier integration in the manufacturing sector. The empirical survey measures buyer-supplier performance in terms of materials planning efficiency, degree of collaboration, transactional procedures, and supply chain practices, using various instruments from the literature.
Initially we conducted a pilot study to establish the effectiveness and suitability of the instruments in measuring the study variables. Subsequently the validated instruments were administered to 529 subjects, of which 490 were studied further. The age of the subjects was restricted to 25-50 years, and their experience in the materials function to 5-20 years. This filtration was intended to capture the pulse of experience and the gut feel of the subjects. The manufacturing sector included industries from automobiles, textiles, chemicals, food, pharmaceuticals, cement, engineering, foundry, forging, and projects. Buyers from these organizations were selected on the basis of the type of items they deal with, namely raw materials, packing materials, finished goods, and spare parts. A minimum of three buyers were randomly selected from each organization and the questionnaire was administered personally. The data gathered by this process were subjected to statistical analysis to understand the overall pattern of existing buyer-supplier relationships. The study helped in identifying the areas of weakness in supply chain management practices in India.
The study revealed that companies are partially aware of partnership concepts and their benefits, but reported doubts about successful implementation. The phase-wise supplier integration programme is completely unknown to the subjects. Moreover, companies have not reported following any other reference model for long-term partnership in India. Thus 'Supplier Integration' is not adopted systematically. The analysis also showed that the existing relationships are of the 'Arm's Length' type (Lewis, 1995). Though the findings highlighted existing hostile relationships of buyers with suppliers, they also revealed that companies are interested in knowing the methodology for modelling supplier integration in the Indian context. We also observe significantly different relations for the different types of items. The study strongly recommends the development of a model that includes phase-wise implementation of trust, integration, investment and alignment strategies. This will ensure that the operating risks remain lowest during implementation, along with the flexibility to select or deselect the various options of supplier integration based on a scientific method.
1Ravi Gor and 2Nita H. Shah
1.
Dept. of Mathematics, Smt.
D. J. Shah Parivar Science College, Mithikui, Dholka-387810, Gujarat, India.
E-mail: ravigor@hotmail.com.
2. Dept. of Mathematics, Gujarat University,
Ahmedabad-380009, Gujarat, India.
E-mail : nita_sha_h@rediffmail.com.
Abstract
The classical Harris-Wilson inventory model assumes that the depletion of inventory is due to a constant demand rate. However, in real-life situations, inventory loss may also be due to deterioration, and the problem for decision makers is then how to control and maintain inventories of deteriorating items. During the past few years, many researchers have studied inventory models for deteriorating items. A second stringent assumption in the traditional economic order quantity model is that the purchaser must pay for the items as soon as they are received. In practice, however, the supplier may provide a credit period for customers to settle the amount. The delay in payment to the supplier is thus a kind of price discount, since paying later indirectly reduces the purchase cost, and it can encourage customers to increase their order quantity.
From a
financial point of view, an inventory represents a capital investment and must
compete with other assets for a firm’s limited capital funds. Thus, it is
important to consider the effects of inflation on the inventory system.
An inventory model in which the supplier provides the purchaser a permissible delay in payments is developed for units in inventory subject to a constant rate of deterioration and selling-price-dependent demand. Shortages are not allowed and the effect of the inflation rate is considered. The optimal solution is characterized to optimize the net profit on
a finite planning horizon. An easy-to-use algorithm is given to find the
optimal selling price, the optimal order quantity and replenishment cycle time
to maximize the net profit. At the end, a numerical example is given to
illustrate the theoretical results and sensitivity analysis of parameters on
the optimal solutions is carried out.
Key Words: EOQ models, inflation, delay in payments, deteriorating items, selling
price dependent demand.
An inflationary inventory model with time-dependent demand, Weibull-distributed deterioration and partial backlogging under permissible delay in payments
Department of Mathematics
University of Kalyani, Kalyani
West Bengal, Pin- 741235.
E-mail:
manjusri_basu@yahoo.com
Abstract
This paper presents a general production inventory model with due consideration of time-dependent partial backlogging and Weibull-distributed deterioration. It also takes into account the impact of inflation, time-dependent demand and a permissible delay in payments, to prepare a standard and acceptable model, with the expectation that the model will work well and be as close to perfect as practicable.
An EOQ Model With Time Dependent Deterioration Under Discounted Cash Flow Approach When Supplier Credits Are Linked To Order Quantity
Bhavin J. Shah, Nita H. Shah and Y. K. Shah
E-mail: bhavinj_sha_h@yahoo.com
Gujarat University, Ahmedabad – 380009, Gujarat, India.
Abstract
This article deals with an inventory model for a situation in which the supplier offers the purchaser a credit period if the purchaser orders a large quantity. Shortages are not allowed. The effects of the inflation rate on the purchase price, ordering price and inventory holding price, of time-dependent deterioration of units, and of the permissible delay in payments are discussed. A mathematical model is developed for units in inventory subject to time-dependent deterioration under inflation, when the supplier offers a permissible delay to the purchaser if the order quantity is
greater than or equal to a pre-specified quantity. The optimal solution is obtained, and an algorithm is given to find the optimal order quantity and replenishment time which minimize the total cost of the inventory system in different scenarios. The paper concludes with a numerical example illustrating the theoretical results, and the interdependence of the parameters on the optimal solutions is studied.
Stochastic modeling and analysis of the blast furnace system of an integrated steel plant

Pt. Ravishankar Shukla University, Raipur (M.P.)
E-mail: profsksingh@hotmail.com
Abstract
This paper deals with the stochastic modeling and analysis of the blast furnace system of an integrated steel plant. The blast furnace system comprises the blast furnace, a drilling machine, a mudgun machine and two skips. The drilling machine is taken as statistically perfect because substitute arrangements exist. Skips are electrically operated buckets which are used to pour specific raw materials into the furnace. Both skips work simultaneously, one from the left side of the furnace and the other from the right side; if one moves upward, the other comes down automatically. If either skip fails, the working of both skips is interrupted. A skip takes at most 5-6 hours to repair, so a shut-down due to delay in repairing the skips hardly ever arises; if any such situation does arise, then to avoid the possibility of a shut-down the skips are replaced at that time and repaired afterwards. Priority in repair is given to the mudgun machine, as any delay in repairing it may cause a shut-down and thus a loss of production. There is a provision of capital repair for the whole system. Capital repair is a planned repair; it takes place quarterly and takes about one month or more depending upon the condition of the equipment. The failure time distribution of each unit is taken to be negative exponential, whereas all the repair time distributions, along with the preparation time distribution, are taken to be arbitrary. Using the regenerative point technique, several system characteristics which are useful to system managers and designers are evaluated. Finally, some graphs are plotted in order to highlight the important results.
Key-Words: Mean time to system failure, reliability, availability, capital repair.
Statistical
decision making and systems management
R.N. Subudhi
Berhampur University, Orissa
Abstract
Statistical decision theory has immense practical value. Its application in the field of management science has made the decision-making function of extremely busy managers more systematic and efficient. It has helped the management of industries and other business organizations scale newer heights of success through timely and effective decision making. The paper emphasizes the role of statistical decision theory in the functional division of the Management Information System (MIS) of various business organizations. Discussing the concepts of systems management and Decision Support Systems (DSS), a critical analysis of various supporting information systems has been made. Scope and challenges are also discussed in the paper.
Optimal Design of Train Timetables to Maximize Schedule
Reliability
* South Eastern Railway,
Kolkata
** Indian Institute
of Management, Calcutta
Abstract
Train timetables are pre-determined schedules designed to aid the customer and the train scheduler. The proper design of timetables is key to maintaining the punctuality of trains. Timetables should be robust to minor perturbations and should allow minimum propagation of delays across trains and across the network. A timetable that is not robust may lead to inefficient and wasteful utilization of resources, which may hamper proper maintenance and thereby lower the reliability of assets. Such a timetable will also project a poor image to the customer, with a consequent loss of goodwill, since consistent adherence to timetable schedules is an indicator of the performance of railway networks. Timetables are
presently prepared manually either by train-path diagrams or by making marginal
changes to existing timetables. The present methods can only test the feasibility of a design: there are no methods for designing timetables to optimize robustness, nor has any metric been defined to measure timetable robustness. Timetable design affects the planning process at all levels – strategic, tactical and operational – and is of major consequence in deciding the bottom line of the railways. Surprisingly, however, this aspect has received scant attention as far as improving on the manual methods used at present, which only test for feasibility and not optimality. Research on the construction of timetables using mathematical modelling on a very small scale has been reported. However, the problem of timetable design to maximize schedule robustness for a large railway has not been tackled so far, and this paper is a maiden effort in this direction. The design of timetables
involves determination of the departure times of each train from its
originating station and all subsequent stations where the train is scheduled to
halt. Determining the departure times of trains involves determining the headway time, that is, the time interval between departures of immediately consecutive trains, based on the order of precedence of trains.
The
headway time of each train consists of a minimum headway time and
a slack time. In this paper, the relationship of minimum headway
time required for each train to section running time has been derived and a
method for deciding order of precedence of train departures has been proposed.
Further, the relationship of slack time to headway time and section running
time has been derived. Based on these derivations, a metric for robustness
termed as Total Slack Index (TSI) is proposed as a function of
slack times and section running times.
In order to ensure the application of this method particularly to large
railways, wherein huge problem sizes may exceed tolerable limits of available
computing power, the concept of routes has been proposed which
reduces the problem size without sacrificing the information and constraints
involved. The timetable design problem has then been formulated as a
mathematical model and demonstrated on a test section to derive train
timetables with maximum schedule robustness.
Keywords: Railways, timetable design, schedule reliability.
Development of a Service Level Prediction model
Subir Kumar Ghosh
Ashley Nigel Pereira
Srinivas S.
Progeon (an Infosys company)
Abstract
Progeon Limited (an Infosys company), a leading BPO company, executes a number of data and call center processes for various clients. When a job is outsourced, a Service Level Agreement (SLA) is signed between Progeon and its client, which forms the bedrock of their engagement. Performance parameters are part of that SLA and typically include the cycle time, the waiting time of a transaction in queue and the abandonment rate (possibly as an average or some percentile). Progeon wants a model that can predict different service level parameters (e.g. average cycle time, average waiting time) for a given staffing pattern, arrival pattern and service pattern. This will facilitate strategic decisions on different SLA parameters at the time of signing the contract. We have developed a queuing model to estimate the SLA parameters, which is explained in this paper.
The whole system is viewed as a multi-workstation composite queuing system, consisting of a number of internal sub-systems and a number of external sub-systems connected in series-parallel combinations. Each sub-system performs an activity, called a "Service". Each internal sub-system (viewed as a queueing system) consists of a number of parallel servers fed by a single queue, and follows a FCFS discipline, a priority discipline, or both. An external sub-system is not part of Progeon, so Progeon has no control over the number of servers or the service time of that activity. The whole flow diagram of a transaction is considered as a directed graph. Each node represents a state of the transaction, and an arc between two nodes represents the activity of a sub-system. Each state i (i = 1, ..., n) is designated by node i; state 1 represents the initial state and state n the absorbing state.
The basic principle is simulation. Any arrival into the system is termed a Transaction. First, an arrival time is simulated from the given arrival pattern and the transaction assumes state 1. From each state i (i = 1, ..., n-1), the next state j is simulated from the state transition probability matrix; once the transition i -> j is decided, a service time for that transition is simulated from the given service pattern. If the activity is internal, the transaction joins the queue and is assigned a service time based on the availability of servers and the priority of the transactions. If the activity is external, it is assigned a service time immediately. The process ends once the transaction reaches the absorbing state n. The different SLA parameters can then be calculated easily once the transaction-wise quantities (e.g. cycle time, waiting time) have been computed.
In addition to the above, a finite-queue system (e.g. a call center) has another consideration, termed reneging in queue. In this case, the given reneging (abandonment) pattern decides whether a transaction is still in queue or has abandoned before being taken up for service: if the simulated reneging time is larger than the simulated waiting time of the transaction, a service time is assigned to it; otherwise the transaction ends and experiences no further state transitions. In this way, the abandonment parameters for the given inputs can be estimated.
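A stripped-down sketch of the simulation principle (illustrative; the transition matrix and service rates are invented, and queueing and reneging at internal nodes are omitted; states are 0-based here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 4-state transaction flow (state 3 is absorbing); the matrix
# and rates are assumptions, not Progeon's actual model.
P = np.array([[0.0, 0.7, 0.3, 0.0],
              [0.0, 0.0, 0.6, 0.4],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 1.0]])
mean_service = {(0, 1): 4.0, (0, 2): 6.0, (1, 2): 3.0,
                (1, 3): 2.0, (2, 3): 5.0}      # minutes per activity

def simulate_cycle_time():
    """Walk one transaction from state 0 to the absorbing state 3,
    accumulating exponential service times on each traversed arc."""
    state, elapsed = 0, 0.0
    while state != 3:
        nxt = rng.choice(4, p=P[state])
        elapsed += rng.exponential(mean_service[(state, nxt)])
        state = nxt
    return elapsed

samples = [simulate_cycle_time() for _ in range(100_000)]
print(f"average cycle time ~ {np.mean(samples):.2f} min, "
      f"90th percentile ~ {np.percentile(samples, 90):.2f} min")
```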
Equivalence of P and Q-properties of a Linear Transformation M_A in the Semidefinite Linear Complementarity Problem
D. Sampangi Raman
Indian Statistical Institute
Chennai
Abstract
In the Semidefinite Linear Complementarity Problem set-up, if a linear transformation L has the P-property, then it also has the Q-property; this is a consequence of a result due to Karamardian [1976]. In general, the converse need not be true: if L has the Q-property, then L need not possess the P-property. However, for the Lyapunov transformation L_A and for the Stein transformation S_A, it has been proved that the P-property is equivalent to the Q-property. In this paper, we address the following question: for the transformation M_A, can we assert that the P-property is equivalent to the Q-property? We answer this question partially in the affirmative, in two cases: (i) when A is symmetric and (ii) when A ∈ R^{2×2}. In general, it is not known whether the Q-property implies the P-property for M_A.
Complementary Cones in
Euclidean Jordan Algebras
Madhur Malik
Indian Statistical Institute.
New Delhi-110016, India.
E-mail: madhur@isid.ac.in
Abstract
Consider a Euclidean Jordan algebra V and the (symmetric) cone of squares K in V. Given a linear transformation L: V → V and q ∈ V, the Linear Complementarity Problem (LCP) over the symmetric cone K is that of finding an x ∈ K such that L(x) + q ∈ K and <x, L(x) + q> = 0. When V = R^n and K = R^n_+, the above problem reduces to the usual LCP over R^n_+. The complementary cones associated with a matrix M ∈ R^{n×n} provide a useful tool for understanding the geometry of the LCP over R^n_+. In this presentation, we extend the notion of complementary cones from the LCP over R^n_+ to the LCP over a symmetric cone in a Euclidean Jordan algebra. We shall show that, unlike complementary cones in the context of the LCP over R^n_+, complementary cones in a general Euclidean Jordan algebra need not be closed, and we provide a sufficient condition for their closedness. We shall also define the concept of a nondegenerate complementary cone in V and connect it with the nondegeneracy of a linear transformation L on V.
Keywords: Euclidean Jordan algebra, symmetric cone, linear
complementarity problem, complementary cone and nondegenerate transformations.
Complementarity
Problems And Positive Definite Matrices
P. Bhimasankaram, A. L. N. Murthy, G. S. R. Murthy
Indian Statistical Institute
SQC & OR Unit, Street no. 8, Habshiguda,
Hyderabad 500 007
And
T. Parthasarathy
Department of Mathematics/ Statistics,
University of Hyderabad, Hyderabad 500044
Abstract
The class of positive definite and positive semidefinite matrices is one of the most frequently encountered matrix classes, both in theory and in practice. In statistics, these matrices appear mostly with symmetry; in complementarity problems, however, symmetry is not necessarily an accompanying feature. In the case of the linear complementarity problem, problems associated with positive semidefinite matrices have some interesting properties: for example, such problems have convex solution sets and can be processed by Lemke's algorithm and Graves' principal pivoting algorithm. It is known that the principal pivotal transforms (PPTs) (defined in the context of the linear complementarity problem) of positive semidefinite matrices are all positive semidefinite. In this article we introduce the concept of generalized PPTs and show that the generalized PPTs of a positive semidefinite matrix are also positive semidefinite. Unlike the class of P-matrices (that is, matrices with all principal minors positive), positive definite and positive semidefinite matrices have no characterizations in terms of the LCP. Against this background, our main interest in this article is to present a characterization of positive definite matrices in terms of the semidefinite linear complementarity problem. Furthermore, we present a simplification procedure for solving a particular type of semidefinite linear complementarity problem involving positive definite matrices.
On some interconnections between strict monotonicity, GUS and P-properties in SDLCP
G. Ravindran
Abstract
We know that for the SDLCP on S^n, strict monotonicity ⇒ P_2 ⇒ GUS ⇒ P. In this article we show that the reverse implications hold for any self-adjoint transformation, and for normal Lyapunov and Stein transformations. By introducing the concept of a principal sub-transformation of a linear transformation, we show that L: S^n → S^n has the P_2-property if and only if, for every n×n real invertible matrix Q, every principal sub-transformation of L_Q has the P-property, where L_Q(X) = Q^T L(QXQ^T) Q. Based on this we show that the P_2, GUS and P-properties coincide for the two-sided multiplication transformation.
Data
Correcting and The Simple Plant Location Problem
Diptesh Ghosh
IIM Ahmedabad, Vastrapur, Ahmedabad 380015, India
E-mail: diptesh@iimahd.ernet.in
Abstract
In this talk we describe an algorithm that obtains solutions to the Simple Plant Location Problem whose costs are not more than a pre-specified amount alpha above the cost of an optimal solution, in reasonable (though not polynomial) time, and present computational results. We first describe a pseudo-Boolean formulation of the Simple Plant Location Problem, and then go on to discuss new pre-processing rules that allow us to considerably reduce the size of problem instances. We introduce data correcting algorithms, which make use of polynomially solvable special cases of combinatorial problems to obtain good quality solutions to general problem instances by changing the data in the original instance, and show how a data correcting algorithm can be formulated for the Simple Plant Location Problem.
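To fix ideas, the pseudo-Boolean view can be illustrated on a toy instance: the total cost of the Simple Plant Location Problem is a function of the 0/1 opening vector alone, since each client is served by the cheapest open plant (brute-force sketch with made-up data, not the paper's algorithm):

```python
from itertools import product

f = [12, 10, 15]                         # fixed opening costs of 3 plants
c = [[4, 7, 9],                          # c[j][i]: cost of serving client j
     [8, 3, 6],                          # from plant i
     [5, 5, 2],
     [9, 6, 4]]

def splp_cost(y):
    # Pseudo-Boolean objective: a function of the opening vector y only.
    if not any(y):
        return float("inf")              # at least one plant must be open
    open_plants = [i for i, yi in enumerate(y) if yi]
    return (sum(f[i] for i in open_plants)
            + sum(min(row[i] for i in open_plants) for row in c))

best = min(product([0, 1], repeat=3), key=splp_cost)
print(best, splp_cost(best))
```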
A
classification scheme for inventory routing problems
N. R. Achuthan
Curtin University of Technology
GPO Box 1987, Perth, Australia 6845
E-mail: archi@maths.curtin.edu.au
Abstract
The management of distribution of commodities from a
central store
(or a production unit) to several
customers (that is retailers) includes resolving the traditional Inventory Problem and the Vehicle Routing
Problem. Traditionally, the Inventory Problem for each customer was first
solved and then for each period the Vehicle Routing Problem was solved
involving the warehouse and the relevant customers that need to be delivered
during that period. But in the recent past, many researchers viewed these two
problems in an integrated manner as an Inventory Routing Problem (IRP). In
general, such Inventory Routing Problems may arise from a variety of situations
imposing several types of assumptions on the cost structures and feasible
solution strategies. In this paper we develop a classification scheme and
the relevant notation to describe
the various possible scenarios of IRP. Furthermore, using our classification
scheme we briefly review the literature.
Dominance Based Approach for Local Search
Prabha. Sharma,
Deptt. of Mathematics,
I.I.T., Kanpur 208 016, INDIA
Abstract
A combinatorial optimization problem (c, F) is to minimize an objective function c with respect to a set of feasible solutions F. For combinatorial optimization problems known to be NP-hard, the interest is in obtaining locally optimal or approximate solutions with as little effort as possible. A locally optimal solution x* ∈ F is the best solution with respect to a neighborhood N(x*) ⊆ F of x*. Thus a local search algorithm for finding a locally optimal solution requires a neighborhood structure and a pivot rule for searching the neighborhood. The `standard' local search algorithm begins the search from an initial x0 ∈ F and then moves iteratively from a solution to a better neighboring solution, as long as there is one, until it finally terminates at a locally optimal solution x*.
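In code, the `standard' algorithm is simply (a generic sketch with a first-improvement pivot rule and a toy instance):

```python
def local_search(x0, neighbours, cost):
    """`Standard' local search: repeatedly move to a better neighbour
    (first-improvement pivot rule) until none exists."""
    x = x0
    improved = True
    while improved:
        improved = False
        for y in neighbours(x):
            if cost(y) < cost(x):
                x, improved = y, True
                break
    return x                 # locally optimal w.r.t. the neighbourhood

# Toy instance: minimize a quadratic over integers with a +/-1 neighbourhood.
print(local_search(17, lambda x: [x - 1, x + 1], lambda x: (x - 5) ** 2))  # 5
```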
Single Sampling LTPD Plans as a
Two-Stage Stochastic Programming Problem
B. Pradhan
SQC & OR (T&P) Unit,
Indian Statistical
Institute,
203 B. T. Road,
Kolkata – 700108
bis@isical.ac.in
And
T. K. Chakraborty
N-366, Srijita Apartment,
Baishnabghata, Patuli,
Kolkata - 700094
Abstract
In a single sampling attribute plan (SSP) the decision variables are the sample size n and the acceptance number c. In designing an SSP for acceptance inspection, it is assumed that the producer knows her process average p1 under normal conditions and that she occasionally produces lots of bad quality. She may then select a lot tolerance percent defective (LTPD), say p2 with p2 > p1, and a risk P(p2) = β of accepting lots of this quality, where P(p) is the operating characteristic of the SSP. The Dodge-Romig LTPD SSP with total inspection of rejected lots is to find the sample size n and the acceptance number c which

minimize I(p1, n, c) = n + (N − n)(1 − P(p1))
subject to P(p2, n, c) = β,
n, c ≥ 0, integer.

It is clear that the above optimization problem is a non-linear integer programming problem (NLIP). In such problems p1 and p2 are assumed to be constant. However, in practical applications, p1 and p2 cannot be known exactly. Here we consider the case when p1 and p2 are assumed to be random variables; the Dodge-Romig problem then becomes a stochastic programming problem.
In this work, we use two-stage stochastic programming techniques to solve the above optimization problem when p1 and p2 have Normal and Beta distributions, under the E-model and the E-V model. The effect of the variability of the parameters on the solution of the two-stage programming formulation has also been studied.
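For comparison, the deterministic (constant p1, p2) Dodge-Romig design can be computed by direct search (an illustrative sketch with assumed parameter values):

```python
from scipy.stats import binom

# Baseline Dodge-Romig LTPD design with constant p1, p2: for each acceptance
# number c, take the smallest n with P(accept | p2) <= beta, then minimize
# the average total inspection I = n + (N - n) * (1 - P(accept | p1)).
N, p1, p2, beta = 2000, 0.01, 0.05, 0.10    # assumed values

best = None
for c in range(20):
    n = c + 1
    while binom.cdf(c, n, p2) > beta:       # OC value at the LTPD point
        n += 1
    I = n + (N - n) * (1 - binom.cdf(c, n, p1))
    if best is None or I < best[0]:
        best = (I, n, c)

I, n, c = best
print(f"n = {n}, c = {c}, average total inspection I = {I:.1f}")
```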
Bounds on Probability
of a finite union
V. Kumaran
National Institute of
Technology
Tiruchirappalli – 620 015,
INDIA
Andras Prekopa
Rutgers Center for Operations Research,
Rutgers University, NJ
Abstract
Given the first m binomial moments of the random variable ν that counts how many of the events A_1, A_2, …, A_n occur (i.e., S_k = E[C(ν, k)] for k = 1, 2, …, m, where C(ν, k) is the binomial coefficient), computing lower and upper bounds for the probability of the union A_1 ∪ A_2 ∪ … ∪ A_n is an important problem which has been discussed by various authors for m = 2, 3, 4 and 5, n ≥ m. In this paper, sharp bounds for the probability of the union given S_1, S_2, …, S_{n−1} (i.e., m = n − 1, n ≥ 3) have been obtained by solving the corresponding linear programming problem. The results are compared with the known results.
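The linear programme in question can be sketched for small n (illustrative moments; the variables are the probabilities p_k that exactly k events occur):

```python
import numpy as np
from scipy.special import comb
from scipy.optimize import linprog

# LP bounds on P(A_1 u ... u A_n) from binomial moments S_1..S_m:
# with p_k = P(exactly k events occur), impose
#   sum_k C(k, j) p_k = S_j (j = 1..m),  sum_k p_k = 1,  p_k >= 0,
# and minimize / maximize P(at least one event) = 1 - p_0.
n = 4
S = {1: 1.4, 2: 0.6}                       # assumed binomial moments (m = 2)

A_eq = [np.ones(n + 1)] + [comb(np.arange(n + 1), j) for j in S]
b_eq = [1.0] + list(S.values())
c = np.array([0.0] + [1.0] * n)            # objective: P(union)

lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n + 1))
hi = linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n + 1))
print(f"{lo.fun:.4f} <= P(union) <= {-hi.fun:.4f}")
```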
Exploring Impatience in a Single Channel Queuing
System
Amit Choudhury, Arun C. Borthakur
Department of Statistics, Gauhati University, Guwahati 781014, India.
E-mail: achoudhury@sancharnet.in
Abstract
This paper analyses a single server Markovian queuing system with the added complexity of customers who are prone to giving up whenever their waiting time is larger than a random threshold, their patience time. We assume that these individual patience times are independent and identically distributed exponential random variables. Applications of such systems can be found in hospital emergency rooms handling critical patients, inventory systems with storage of perishable goods, call centres and the like. We consider two approaches to modelling impatience, viz. reneging till the beginning of service and reneging till the end of service; both approaches find wide application. Under this framework, a detailed but lucid derivation of the distribution of the virtual waiting time, as well as that of the waiting time in queue, is presented. These two waiting times are derived separately for each of the two impatience modelling approaches; in all, four waiting times are presented to account for the different scenarios. Additionally, such reneging systems also have some interesting performance measures, such as the reneging rate, the fraction of customers who renege and the mean waiting times; expressions for these are presented.
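A short simulation sketch of the reneging-till-beginning-of-service case (illustrative parameters; exponential patience as in the paper's assumption):

```python
import numpy as np

def mm1_reneging(lam=0.9, mu=1.0, theta=0.5, n=200_000, seed=0):
    """Simulate an M/M/1 FCFS queue where each customer abandons if service
    has not begun within an exponential(theta) patience time ("reneging till
    beginning of service"). Returns the fraction reneging and the mean
    queueing delay of served customers."""
    rng = np.random.default_rng(seed)
    arrivals = np.cumsum(rng.exponential(1 / lam, n))
    service = rng.exponential(1 / mu, n)
    patience = rng.exponential(1 / theta, n)
    t_free, reneged, waits = 0.0, 0, []
    for a, s, p in zip(arrivals, service, patience):
        wait = max(0.0, t_free - a)      # delay until the server is free
        if wait > p:                     # patience exhausted: customer leaves
            reneged += 1                 # (a reneger never occupies the server)
        else:
            waits.append(wait)
            t_free = a + wait + s
    return reneged / n, float(np.mean(waits))

frac, w = mm1_reneging()
print(f"fraction reneging ~ {frac:.3f}, mean wait of served ~ {w:.3f}")
```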
Multichannel
queue with variable arrival rate
K. Udayachandrika, S. Muthulakshmi
Readers, Department of Mathematics
Avinashilingam Deemed University
Coimbatore, Tamil Nadu
Abstract
A queueing system with C channels and room capacity N is considered. The system starts with arrival rate λ0. Whenever the number of customers in the system increases to a pre-assigned number R1 (> C), the arrival rate changes from λ0 to λ1 and continues at that rate as long as the system size is less than a pre-assigned number R2 (> R1). When the size reaches R2, the arrival rate changes from λ1 to λ2; on the other hand, if the size falls below R1, the arrival rate changes back to λ0. The same process is repeated, so that if the system size lies between Rm and N the arrival rate is λm. The transient and the steady state probabilities are derived.
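Reading the rule as a state-dependent arrival rate, the steady-state probabilities follow from standard birth-death balance; a small sketch (illustrative parameters, not the paper's derivation):

```python
import numpy as np

def steady_state(C=3, N=20, mu=1.0, lams=(3.0, 2.0, 1.0), thresholds=(5, 10)):
    """Steady-state probabilities of a C-server, capacity-N birth-death queue
    whose arrival rate is lam_0 below R1, lam_1 on [R1, R2) and lam_2 from R2
    on (a state-dependent reading of the variable-rate rule)."""
    def lam(k):                               # arrival rate in state k
        level = sum(k >= r for r in thresholds)
        return lams[level]
    def srv(k):                               # service rate in state k
        return min(k, C) * mu
    p = np.ones(N + 1)
    for k in range(N):                        # p_{k+1} = p_k * lam(k)/srv(k+1)
        p[k + 1] = p[k] * lam(k) / srv(k + 1)
    return p / p.sum()

p = steady_state()
print(f"P(block) = {p[-1]:.4f}, "
      f"mean number in system = {(np.arange(p.size) * p).sum():.3f}")
```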