Promoting Ethics in OR Practice

Marc Le Menestrel

Department of Economics and Business

Universitat Pompeu Fabra

Ramon Trias Fargas, 25-27

08005 Barcelona, Spain

E. Mail: Marc.Lemenestrel@upf.edu

 

Abstract

In 2003, a group of OR scholars convened at INSEAD to study the relevance and importance of ethics in Operations Research. The workshop aimed at promoting ethics in the field of OR at the theoretical, practical and institutional levels. Besides presenting our initiative and its rationale, this talk focuses on the practical dimension of ethics in Operations Research. We start by arguing that most of the ethical issues that we confront in OR can be resolved by following tenets of good professional practice. In this sense, the question of ethics in modelling can be viewed as a question of quality control. We present principles for judging such quality during the different phases of an OR analysis. Second, we argue that although the ethical practice of OR necessitates good practice, good practice is not sufficient, and essential limitations remain. We present and illustrate the ethical limitations of OR good practice, in an attempt to clarify the ethical dilemmas they may entail for the OR analyst. This leads us to propose a synthesis.

While the practice of OR consists mainly in finding solutions to well-defined problems, we propose to put a direct emphasis on the role of the OR practitioner in raising questions to uncover ill-defined problems. Indeed, the analytical process of OR tends to reduce the problem to a structured and solvable form from which ethical concerns may have to be abstracted away. As a main actor in this process, the OR practitioner is often well aware of what is missing in his analysis. The idea is to sketch a methodological approach that would help the OR practitioner contribute to uncovering these hidden and value-loaded issues, thus participating in a more exhaustive and more ethical process.

 

Usage of quantitative techniques in the banking sector

R. B. Barman

Reserve Bank of India, Bandra-Kurla Complex,

Mumbai 400051, India.

 

Abstract

In this talk the author gives an overview of the use of operations research and other quantitative techniques in the banking sector. The banking sector needs to take a number of investment decisions while meeting the constraints of the service expected of a bank. A central bank like the Reserve Bank of India has to provide guidelines for investments and also manage its foreign exchange reserves, subject to the constraints imposed by the service expected of such banks. In deciding these policies, quantitative techniques are quite useful. The author presents his experience of the use of such techniques from the point of view of a bankers' bank.

 

Efficiency of Indian Banks During Post-Reform Period: A DEA Approach

Abhiman Das

Reserve Bank of India, Bandra-Kurla Complex,

Mumbai 400051, India.

Email: adas@rbi.org.in.

 

Abstract

The purpose of this paper is to investigate the performance of the Indian commercial banking sector during the post-reform period 1992-2002. We evaluate several efficiency estimates of individual banks using a nonparametric frontier methodology, Data Envelopment Analysis (DEA). Two models based on two different approaches have been constructed to show how efficiency scores vary with changes in inputs and outputs. The analysis also aims to relate the variation in the calculated efficiencies to a set of variables, viz. bank size, ownership, capital adequacy ratio, risk exposure and management quality. We find that medium-sized public sector banks performed reasonably well and are more likely to operate at higher levels of technical efficiency. We find a close relationship between efficiency and soundness as measured by a bank's capital adequacy ratio. The empirical results also show that the technically more efficient banks are those that have, on average, fewer non-performing loans.

 

JEL classification: D61; G21; G34

Keywords: Bank efficiency; Data envelopment analysis; Indian banks
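As a minimal illustration of the envelopment form underlying such DEA scores, the following sketch solves the input-oriented CCR linear program with scipy; the four banks, their inputs and outputs are invented placeholders, not the paper's data.

# Input-oriented CCR (constant returns to scale) DEA score for one bank,
# solved as a linear program: minimise theta subject to a cone combination
# of all banks using at most theta times the evaluated bank's inputs while
# producing at least its outputs.  Data below are illustrative only.
import numpy as np
from scipy.optimize import linprog

X = np.array([[120.0, 900.0], [80.0, 650.0], [150.0, 1100.0], [60.0, 400.0]])  # inputs
Y = np.array([[700.0], [520.0], [760.0], [310.0]])                             # outputs

def ccr_efficiency(k: int) -> float:
    n, m = X.shape            # number of banks, number of inputs
    s = Y.shape[1]            # number of outputs
    c = np.r_[1.0, np.zeros(n)]                 # decision vector [theta, lambda]
    A_in = np.c_[-X[k].reshape(m, 1), X.T]      # sum_j l_j x_ij - theta x_ik <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y.T]       # -sum_j l_j y_rj <= -y_rk
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[k]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun  # theta = 1 means the bank is technically efficient

for k in range(len(X)):
    print(f"bank {k}: efficiency = {ccr_efficiency(k):.3f}")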

 

Fatal flaw of free-market system: lag residual examination

Diganta Saikia     Chitro Majumdar

M-20, GK-II, New Delhi-48

India

E. mail: s_diganta@yahoo.com, chitromajumdar@yahoo.com

 

Abstract

One of the most potent reasons for expecting economic growth to remain sluggish is that it suits the policy objectives of the two leading countries. The constellation that seems to be emerging as a sequel to the imperial circle deserves to be described as a golden age of capitalism.

The fatal flaw of a free-market system is its inherent instability. The belief that financial markets are self-regulating is simply false. Fortunately, Secretary Becker is aware of this fact, and the administration has begun to exert active economic leadership since he moved to the Treasury. We can grasp this anthology from the works of Wicksell. Certainly, it is not laissez faire that has brought us to the threshold of a new golden age of capitalism, but a concerted economic policy designed to counteract the excesses of a free-market system. Following Schumpeter, Böhm-Bawerk (1891) and Fisher (1907), a eulogistic review introduced what was at once a profound conception of the nature of advance production and at the same time an over-simplification, which made it slightly misleading where the rate of interest in organised money markets has tended to be positive. It is stated much better, and in a much more stimulating way, by K. Wicksell in the first volume of his Principles (1934-35). The heteroskedasticity problem is often solved by estimating the historical regression hypothesis in a log-linear form. In this paper, we argue that continuous residual functions can be applied in a free-market, post-globalisation economy. When we regress log y on log x, the proposed estimated equation is:

log y = 0.0757 + 0.9562 log x
        (0.0574)   (0.0183)

R² = 0.9935   RSS = 0.03757
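A fit of this form can be reproduced in outline by ordinary least squares on the logged series; the sketch below uses simulated data with an assumed data-generating process, since the paper's series is not given here.

# OLS fit of log y = a + b log x, reporting coefficients, standard errors,
# R^2 and RSS in the layout of the estimated equation above (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(1.0, 100.0, size=200)
log_x = np.log(x)
log_y = 0.08 + 0.96 * log_x + rng.normal(0.0, 0.02, size=200)  # assumed DGP

X = np.column_stack([np.ones_like(log_x), log_x])      # design matrix
beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)       # [intercept, slope]
resid = log_y - X @ beta
rss = float(resid @ resid)
tss = float(((log_y - log_y.mean()) ** 2).sum())
r2 = 1.0 - rss / tss
sigma2 = rss / (len(log_y) - 2)                        # residual variance
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X))) # standard errors

print(f"log y = {beta[0]:.4f} + {beta[1]:.4f} log x")
print(f"        ({se[0]:.4f})   ({se[1]:.4f})")
print(f"R2 = {r2:.4f}   RSS = {rss:.5f}")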

 

Stochastic Investment Modelling: A multiple time-series approach

Chitro Majumdar

M-20, GK-II, New Delhi-48

E. mail:  chitromajumdar@yahoo.com

 

Abstract

In this paper we take up the multiple time-series modelling approach suggested by Tiao & Box (1981) to construct a stochastic investment model for price inflation, share dividends, share dividend yields and long-term interest rates in India. To deal with aberrant observations, we perform a time-series outlier analysis; this method has the advantage of being direct and transparent, and offers an alternative approach to handling extreme observations in the data. The sequential and iterative steps of tentative specification, estimation and diagnostic checking parallel those of the well-known Box-Jenkins method in univariate time-series analysis. This research might be useful for actuarial applications in which extreme stochastic fluctuations are important. Unlike some other stochastic asset models in the literature, the approach does not require specifying any a priori causality.

 

Keywords: ARCH Models; Non-linear Threshold Models; Wilkie Model; Vector Autoregressive-Moving Average Models; Time-Series Outliers; Price Inflation; Long-Term Interest Rates; Share Dividend Yields; Share Dividends
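The specify-estimate-diagnose cycle described above can be sketched with statsmodels' vector-ARMA implementation; the simulated two-series data below stand in for the Indian inflation, dividend, yield and interest-rate series, and the |residual| > 3 outlier flag is an illustrative choice.

# Sketch of the Tiao-Box style specify / estimate / diagnose cycle with a
# vector ARMA model, on simulated data (the paper's series are not given).
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.varmax import VARMAX

rng = np.random.default_rng(1)
n = 300
e = rng.normal(size=(n, 2))
y = np.zeros((n, 2))
for t in range(1, n):                       # simple VAR(1) data generator
    y[t] = 0.5 * y[t - 1] + e[t]
data = pd.DataFrame(y, columns=["inflation", "long_rate"])

# Tentative specification: VARMA(1,1); estimation by maximum likelihood.
model = VARMAX(data, order=(1, 1))
result = model.fit(disp=False)
print(result.summary())

# Diagnostic checking: residuals should resemble vector white noise;
# unusually large standardized residuals flag candidate outliers.
std_resid = result.standardized_forecasts_error.T
print("possible outliers at t =", np.where(np.abs(std_resid) > 3)[0])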

Pedigree Polytope is a Combinatorial Polytope

Tiru Arthanari

  Department of MSIS, University of Auckland 

7, Symonds Street, Auckland, New Zealand.

E.mail: t.arthanari@auckland.ac.nz

 

Abstract

Naddef and Pulleyblank (1981) define a 0-1 polytope as combinatorial if the midpoint of the line joining any pair of nonadjacent vertices is the midpoint of the line joining another pair of vertices. They show that some well-known classes of polyhedra, such as matching polyhedra and permutation polyhedra, are combinatorial polyhedra. Arthanari (2002) defined and studied a combinatorial object called a pedigree. Pedigrees are in one-to-one correspondence with the Hamiltonian cycles in a complete graph on n vertices. In this paper we show that the pedigree polytope is combinatorial. As a consequence we have the strong adjacency property for pedigree polytopes; that is, for any weight vector, the best valued pedigree is adjacent to some second best valued pedigree.
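In symbols, the Naddef-Pulleyblank condition reads (vert(P) denoting the vertex set of the 0-1 polytope P):

% Combinatoriality of a 0-1 polytope P: every pair of nonadjacent
% vertices shares its midpoint with some other pair of vertices.
\[
u, v \in \operatorname{vert}(P)\ \text{nonadjacent}
\;\Longrightarrow\;
\exists\, \{w, x\} \neq \{u, v\},\ w, x \in \operatorname{vert}(P):\quad
\tfrac{1}{2}(u + v) = \tfrac{1}{2}(w + x).
\]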

 

 

Integral version of the Hahn-Banach Theorem for Combinatorial Optimization

H.Narayanan

Indian Institute of Technology Bombay

Powai, Mumbai 400076

 

Abstract

A set function f : 2^S → ℝ is said to be polyhedrally tight (pt) (respectively, dually polyhedrally tight (dpt)) iff in the set polyhedron (respectively, dual set polyhedron), denoted P_f (P^f) and defined by

x(X) ≤ f(X) for all X ⊆ S
(respectively, x(X) ≥ f(X) for all X ⊆ S),

every inequality can be satisfied as an equality (not necessarily simultaneously).

The Hahn-Banach Separation Theorem states that if f is a convex function and g is a concave function, both taking the value zero on the zero vector, with f ≥ g, then there exists a linear function h s.t. f ≥ h ≥ g. In general, if f and g take integral values on integral points, it is false that an h can be found which also takes integral values on integral points. However, such a result is true in the case of submodular set functions (i.e., f(X) + f(Y) ≥ f(X ∪ Y) + f(X ∩ Y) for all X, Y ⊆ S, f being defined on subsets of S) and supermodular set functions (i.e., f(X) + f(Y) ≤ f(X ∪ Y) + f(X ∩ Y) for all X, Y ⊆ S).

Indeed, we have the `integral sandwich theorem for submodular functions', which states: if f and g are integral submodular and supermodular set functions respectively, with f ≥ g, then h can be taken to be an integral modular function. In this talk we give sufficient conditions for such a result to hold for the general class of polyhedrally tight and dually polyhedrally tight set functions. Using these we derive the (integral) sandwich theorem for submodular/supermodular functions and, working with a (0, 1, -1) coefficient matrix generalization of set polyhedra, the 1/2-integral sandwich theorem for pseudopolymatroids.
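In symbols, the discrete separation statement invoked above reads (with a modular h normalized, up to a constant, as a sum of singleton values):

% Integral sandwich theorem: f integral submodular, g integral
% supermodular on 2^S, f >= g pointwise; h integral modular.
\[
f\ \text{submodular},\quad g\ \text{supermodular},\quad f \ge g
\;\Longrightarrow\;
\exists\, h\ \text{integral modular}:\quad f \ge h \ge g,
\]
\[
\text{where } h(X) = c_0 + \sum_{e \in X} c_e
\ \text{for some integers } c_0, c_e .
\]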

 

 

The Alternating Cone of a 2-Colored Graph

Murali K. Srinivasan

Indian Institute of Technology Bombay

Powai, Mumbai 400076

 

Abstract

Suppose the edges of a graph G are colored red and blue. Assign a nonnegative real weight to every edge such that, at every vertex, the sum of the weights of the red edges incident at that vertex is equal to the sum of the weights of the blue edges incident at that vertex. All such nonnegative real weightings define a convex polyhedral cone in the edge space, called the alternating cone of the 2-colored graph G. We study basic properties of this cone: extreme rays, integral vectors, dimension, and flow theory. (This is joint work with Uri Peled and Amitava Bhattacharya.)
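In symbols, writing R(v) and B(v) for the sets of red and blue edges incident at vertex v, the cone described above is:

% Alternating cone of a 2-colored graph G = (V, E):
\[
\mathcal{A}(G) \;=\; \Bigl\{\, x \in \mathbb{R}^{E}_{\ge 0} \;:\;
\sum_{e \in R(v)} x_e \;=\; \sum_{e \in B(v)} x_e
\quad \text{for every } v \in V \,\Bigr\}.
\]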

 

 

A 3-Neighborhood Heuristic Algorithm For Constrained Redundancy Optimization In Complex Systems

         Manju Agarwal  and  Sudhanshu Aggarwal                   

Department of  Operational Research

University of Delhi

Delhi-110007.

Abstract

In the last four decades several heuristic algorithms have been developed for solving constrained redundancy reliability optimization problems. Almost all of these algorithms search for the optimal solution iteratively, remaining within the feasible region; a redundancy is added to only one subsystem in each iteration, and the selection of the subsystem is based on a sensitivity factor. Thus the solutions obtained are optimal only in a 1-neighborhood. It was in 1982 that Kohda & Inoue (KI) presented a heuristic algorithm in which two feasible solutions obtained in successive iterations are in a 2-neighborhood of each other, i.e., a redundancy is added to one subsystem and a redundancy is subtracted from another. Shi (1987) developed a heuristic algorithm based on minimal path sets in which the feasible solutions obtained in successive iterations are in a 1- or 2-neighborhood of each other. Perhaps the most interesting and efficient heuristic algorithm in terms of solution quality is that given by Kim & Yum (KYA, 1993), in which the search is made not only in the feasible region but also in the infeasible region. In this algorithm two feasible solutions obtained in successive iterations need not even be in a 2-neighborhood of each other.

This paper presents a new criterion to improve redundancy optimization heuristically in a 3-neighborhood. In the proposed algorithm an excursion into the infeasible region is made only once, and the search continues to explore the feasible region by performing a series of selection and exchange operations, subtracting (adding) a redundancy from (to) one subsystem and adding (subtracting) a redundancy to (from) one or two subsystems. On a set of examples, the performance of the proposed algorithm is compared with that of the existing algorithms (KI, Shi, KYA). Computational results show that our algorithm can be used as an attractive alternative in terms of solution quality.
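For contrast with the proposed 3-neighborhood search, the classical 1-neighborhood greedy step described in the first paragraph can be sketched as follows; the series-parallel structure, reliabilities, costs and budget are invented for illustration.

# Classical 1-neighborhood heuristic for constrained redundancy allocation:
# repeatedly add one redundant unit to the subsystem with the best
# sensitivity factor (reliability gain per unit cost), staying feasible.
# Series system of parallel subsystems; all data below are illustrative.
import math

r = [0.80, 0.70, 0.75]   # component reliabilities per subsystem (assumed)
c = [1.0, 2.0, 1.5]      # cost of one redundant unit (assumed)
budget = 15.0

def system_reliability(x):
    return math.prod(1.0 - (1.0 - r[i]) ** x[i] for i in range(len(r)))

x = [1, 1, 1]                            # start with one unit everywhere
spent = sum(c[i] * x[i] for i in range(len(x)))
while True:
    base = system_reliability(x)
    best, best_i = 0.0, None
    for i in range(len(x)):
        if spent + c[i] > budget:
            continue                     # adding here would be infeasible
        x[i] += 1
        gain = (system_reliability(x) - base) / c[i]   # sensitivity factor
        x[i] -= 1
        if gain > best:
            best, best_i = gain, i
    if best_i is None:
        break                            # no feasible 1-neighborhood move
    x[best_i] += 1
    spent += c[best_i]

print("allocation:", x, "reliability:", round(system_reliability(x), 4))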

 

Quadratic Program with Homogeneous Constraints

Archana Khurana

Department of Mathematics

 University of Delhi, Delhi-110007

E-mail: archana2106@rediffmail.com, archana2106@yahoo.com

 

Abstract

In this paper an algorithm is developed to solve a quadratic programming problem when some of its constraints are homogeneous. Using the homogeneous constraints, a transformation matrix T is constructed. With the help of this matrix T the given problem is transformed into another quadratic program with fewer constraints. A relationship between the original problem and the transformed problem is also established, which ensures that the solution of the original problem can be obtained from the transformed problem. The algorithm is illustrated with the help of a numerical example.
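The abstract does not spell out how T is constructed; one standard choice, sketched below, takes the columns of T as a basis of the null space of the homogeneous block Bx = 0 and substitutes x = Tz, which eliminates those constraints. Whether this coincides with the author's T is an assumption.

# Null-space reduction of a QP with homogeneous equality constraints:
#   min 0.5 x'Qx + c'x  s.t.  A x <= b,  B x = 0.
# With T spanning null(B) and x = T z, the homogeneous block vanishes:
#   min 0.5 z'(T'QT)z + (T'c)'z  s.t.  (A T) z <= b.
import numpy as np
from scipy.linalg import null_space

Q = np.diag([2.0, 2.0, 2.0])
c = np.array([-2.0, -4.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]]); b = np.array([4.0])
B = np.array([[1.0, -1.0, 0.0]])         # homogeneous constraint x1 = x2

T = null_space(B)                         # columns span {x : Bx = 0}
Qz = T.T @ Q @ T                          # reduced Hessian
cz = T.T @ c                              # reduced linear term
Az = A @ T                                # reduced inequality matrix

# Any QP solver can now handle the smaller problem in z; here the
# unconstrained stationary point happens to be feasible (illustration only).
z = np.linalg.solve(Qz, -cz)
if np.all(Az @ z <= b + 1e-9):
    x = T @ z                             # recover the original variables
    print("x* =", np.round(x, 4))         # -> [1.5 1.5 0. ]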

 

The use of genetic algorithms for solving multi-objective bi-level programming problems through fuzzy goal programming

Bijay Baran Pal, Rabin Kumar Jana and Surapati Pramanik

Department of Mathematics,

University of Kalyani,

Kalyani- 741235, INDIA

 

Abstract

In a multi-objective bi-level programming problem (MOBLPP), each of the two decision makers (DMs), located at two different hierarchical levels (upper and lower), independently controls a separate set of decision variables and is interested in optimizing a set of objectives of his/her own. It may be noted that in a fuzzy programming (FP) formulation of the problem with a multiplicity of objectives at each level, each DM faces the computational difficulty of assigning an imprecise aspiration level to each of the objectives, owing to their conflicting nature in the decision-making context. To overcome this difficulty, a GA-based solution approach is used here to reach the individual decision for each level, with aspiration levels set for the level objectives.

In the FGP formulation of the problem, the fuzzy objective goals at each level and the decision vector controlled by the upper-level DM are first defined in terms of their membership functions, by introducing certain tolerance limits for achieving their aspired levels. The membership functions are then transformed into membership goals by introducing under- and over-deviational variables and assigning the highest membership value (unity) to each of them. In the solution process, a GA solution technique is introduced into the framework of the FGP model to reach a satisfactory decision for both DMs, thereby optimizing the overall benefit of the organization. The proposed solution approach is illustrated by a numerical example and compared with the solution of the conventional FGP approach.
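A standard way to write the membership functions and membership goals just described, under assumed notation (aspiration level g_k, tolerance t_k, weights w_k) for a maximization-type goal, is:

% Membership function of a fuzzy (max-type) objective goal Z_k with
% aspiration level g_k and lower tolerance limit g_k - t_k:
\[
\mu_k(Z_k) =
\begin{cases}
1, & Z_k \ge g_k,\\[2pt]
\dfrac{Z_k - (g_k - t_k)}{t_k}, & g_k - t_k \le Z_k < g_k,\\[2pt]
0, & Z_k < g_k - t_k,
\end{cases}
\]
% converted to a membership goal with under- and over-deviational
% variables d_k^-, d_k^+ >= 0 and aspired level unity:
\[
\mu_k(Z_k) + d_k^- - d_k^+ = 1, \qquad
\min \ \sum_k w_k\, d_k^- .
\]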

 

A comparative study of optimum design of a cone clutch using genetic algorithm, geometric programming and the graphical method

G.Balavenkata Reddy        M.Mohan Reddy      C.S.Rao

KSRM college of Engineering, Kadapa-516 003 (AP)

E.mail: reddy_mohan@rediffmail.com

Abstract

In the present study, optimum volume design of a cone clutch, a widely used power-transmission element in machine drives, automobiles, etc., has been investigated using a genetic algorithm. The ergodicity of evolution operators makes genetic algorithms very effective in performing global search compared to traditional optimization techniques. The numerical results obtained agree extremely well with the solution of the same problem solved using geometric programming, a traditional non-linear programming technique, and also with the graphical method.

 

Key words: Cone Clutch, Genetic Algorithm, Geometric Programming

 

 

The minimization of waste through optimizing the panel cutting strategy

      U H Acharya                 Boby John

Indian Statistical Institute

                                      Bangalore Centre

E mail: uhacharya@hotmail.com

bobymon@hotmail.com

 

Abstract

The supplier of panels like rear doors, side screens, roof plates etc. to M/s Asea Brown Boveri, Bangalore was adopting trial-and-error/ad hoc methods for cutting large 2 mm sheets into sheets of smaller dimensions. The sheets were available in a standard size, 2500 mm x 1250 mm, only. Compounding this was the fact that different panels required different cut-part sizes, and all these panels are required in the same final product. Naturally the company, in the absence of a scientific approach, was incurring large material waste as well as spending considerable time on finalizing the panel cutting procedure. Every purchase order used to be a battle for the purchaser. The authors developed a two-dimensional cutting plan, which minimized the waste as well as the initial planning time.
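As a first-cut illustration of the waste arithmetic involved (the authors' actual plan lays out several part sizes jointly), the sketch below counts how many rectangles of one size fit the standard sheet in a plain grid, trying both orientations; the part sizes are invented.

# Yield and waste of a simple guillotine grid layout for one cut-part size
# on the standard 2500 mm x 1250 mm sheet, trying both part orientations.
SHEET_W, SHEET_H = 2500, 1250

def grid_yield(w: int, h: int) -> int:
    """Parts per sheet when laid out in a plain rows-by-columns grid."""
    return (SHEET_W // w) * (SHEET_H // h)

def best_layout(w: int, h: int):
    n = max(grid_yield(w, h), grid_yield(h, w))   # try both orientations
    waste = 1.0 - n * w * h / (SHEET_W * SHEET_H)
    return n, waste

for part in [(700, 400), (600, 300), (450, 450)]:  # illustrative part sizes
    n, waste = best_layout(*part)
    print(f"part {part}: {n} per sheet, waste {waste:.1%}")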

 

An integer-programming model for deployment of highway incident response vehicles

                Raktim Pal                                                Indranil Bose            

           i2 Technologies, USA               University of Hong Kong, Hong Kong

         raktim_pal@i2.com                                bose@business.hku.hk

 

Abstract

Congestion caused by highway incidents is a major concern for road users in urban areas. In order to minimize the adverse effect of incidents, appropriate response strategies are required which will clear incidents as quickly as possible. An important component of these strategies is the deployment of incident response vehicles. These vehicles patrol a highway corridor in order to clear incidents as soon as they occur and keep the traffic moving. Incident response vehicles may be assigned to depots located in different parts of an urban area. Upon detection of an incident at the traffic operation center, response vehicles may be dispatched to the incident site from a vantage depot location. In this study a reliability-based mixed integer programming model has been developed. The model is used to find the best location of depots and the assignment of incident response vehicles to each of these depots, so that incidents at the demand nodes can be attended from the feasible depot locations at minimum cost. The proposed model overcomes the limitations of the existing literature on incident response systems by considering all the major costs involved in operating such a system, such as the fixed and variable costs of operation for both response vehicles and depots. Moreover, this is the first time a distinction between major and minor incidents is made, to accommodate the prioritized service provided by an incident response service. In addition, the reliability of the response service is also modeled. Using a hypothetical problem, the model is used to find the optimal location of depots at different probable sites and to assign incident response vehicles to each depot. The response areas of each vehicle are also determined. Further, the model gives guidelines regarding which type of vehicle should cover what percentage of a particular type of incident in a given zone. Detailed sensitivity analysis is performed to illustrate the impact of various parameters on the design of the incident response system.
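A much-reduced skeleton of the location-and-assignment core of such a model is sketched below in PuLP; the reliability terms, vehicle classes and the major/minor incident distinction of the paper are omitted, and all costs are invented placeholders.

# Skeleton of the depot location + response assignment core as a MIP:
# open depots (fixed cost) and assign each demand node to one open depot
# (response cost), minimising total cost.  All data are invented.
import pulp

depots = ["D1", "D2", "D3"]
nodes = ["N1", "N2", "N3", "N4"]
fixed = {"D1": 100, "D2": 120, "D3": 90}
resp = {("D1", "N1"): 10, ("D1", "N2"): 25, ("D1", "N3"): 40, ("D1", "N4"): 35,
        ("D2", "N1"): 30, ("D2", "N2"): 12, ("D2", "N3"): 15, ("D2", "N4"): 28,
        ("D3", "N1"): 45, ("D3", "N2"): 30, ("D3", "N3"): 18, ("D3", "N4"): 8}

prob = pulp.LpProblem("incident_response_depots", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", depots, cat="Binary")
x = pulp.LpVariable.dicts("assign", (depots, nodes), cat="Binary")

prob += (pulp.lpSum(fixed[d] * y[d] for d in depots)
         + pulp.lpSum(resp[d, n] * x[d][n] for d in depots for n in nodes))
for n in nodes:                        # every demand node must be covered
    prob += pulp.lpSum(x[d][n] for d in depots) == 1
for d in depots:
    for n in nodes:                    # assignment only to an open depot
        prob += x[d][n] <= y[d]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("open depots:", [d for d in depots if y[d].value() == 1])
print("assignment:", {n: d for d in depots for n in nodes
                      if x[d][n].value() == 1})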

 

Metaheuristic approach for solving the manufacturing problem

1G. Kannan, 2P. Nagaraj, 3S. Jayabal, 4J. Varadarajan, 4M. Ramya

1,4Department of Mechanical Engineering

Sethu Institute of Technology, Kariapatti-626106, Tamilnadu

2Department of Mechanical Engineering

Mepco Schlenk Engineering College, Sivakasi-626005, Tamilnadu

3Department of Mechanical Engineering

A.C College of Engineering & Technology, Karaikudi-623004

E.mail:  govindkannan_2001@rediffmail.com

 

Abstract

Planning is a must for the success of any industry in a continuously changing environment. Management must work out an intermediate-range plan, called the aggregate plan, consistent with long-range policies and the resources allocated by long-range decisions. An aggregate production plan (APP) for any industry usually covers 6 to 18 months. A mixed strategy comprising inventory, backorders, sub-contracting, overtime, firing and hiring is formulated to solve the aggregate production-planning problem. An effective plan must balance several essential factors: minimizing production-planning cost; minimizing the cost of manufacturing, viz. production cost, sub-contracting cost, overtime cost, hiring and firing cost, and inventory cost; maximizing profit; and maximizing service to customers. In this paper an attempt has been made to solve this manufacturing problem through metaheuristic approaches (simulated annealing and a genetic algorithm) in which the mixed strategy is formulated to solve the aggregate production plan. It is evident that the aggregate plan generated using SA is more cost-effective than the plan generated using the conventional technique (LP).
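A bare-bones sketch of the simulated-annealing ingredient is given below for a toy aggregate plan (production per period against demand, with holding and backorder penalty costs); the move rule, cooling schedule and all cost data are assumptions for illustration.

# Bare-bones simulated annealing for a toy aggregate production plan:
# choose production per period; cost = production + inventory holding
# + backorder penalties.  All parameters are invented for illustration.
import math, random

random.seed(42)
demand = [100, 120, 90, 140, 110, 130]          # 6 periods (assumed)
C_PROD, C_HOLD, C_BACK = 5.0, 1.0, 4.0          # unit costs (assumed)

def cost(plan):
    total, inv = 0.0, 0
    for p, d in zip(plan, demand):
        inv += p - d
        total += C_PROD * p
        total += C_HOLD * inv if inv > 0 else C_BACK * (-inv)
    return total

plan = [sum(demand) // len(demand)] * len(demand)   # level-strategy start
best, best_cost = plan[:], cost(plan)
T = 100.0
while T > 0.1:
    cand = plan[:]
    t = random.randrange(len(cand))
    cand[t] = max(0, cand[t] + random.choice([-10, 10]))  # local move
    delta = cost(cand) - cost(plan)
    if delta < 0 or random.random() < math.exp(-delta / T):
        plan = cand
        if cost(plan) < best_cost:
            best, best_cost = plan[:], cost(plan)
    T *= 0.995                                   # geometric cooling
print("plan:", best, "cost:", round(best_cost, 1))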

 

Optimization of “Coal Blending” in a Steel Plant using Non-linear Programming

S.K.Dutta        J.Roy

Durgapur Steel Plant

Durgapur: 712 203

 

Abstract

Metallurgical-grade coke is the basic necessity for blast furnace iron making in an integrated steel plant. The quality and cost of the hot metal (pig iron) produced depend upon the quality of the input coke in general and its ash content in particular. This coke is produced at the coke ovens from input coal by a carbonization process.

 

The ash content of Indian coal is very high, which is detrimental for iron making with regard to both quality and cost. For this reason indigenous coal with high ash content is suitably blended with imported coal containing a low ash percentage. As per prevailing practice, the blended input coal's ash content should be around 18%, and moreover there should be consistency w.r.t. this parameter.

 

The ash content of indigenous coal from different sources fluctuates widely, between 18% and 35%, whereas the ash content of imported coal is more or less stable around 9%. To satisfy the above-mentioned technical norms through a complex process of coal preparation in the Coal Handling Plant (CHP), in terms of its layout, physical facilities and an associated host of technical constraints, a non-linear programming model is developed. The model provides a shift-wise operational plan to minimize the ash content of the blended coal and its variation from shift to shift.
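The underlying blending arithmetic can be made explicit; the 30% ash figure below is only an illustrative value within the stated range:

% Fraction x of imported coal (9% ash) blended with indigenous coal of
% ash content a to hit the 18% target:  9x + a(1 - x) = 18, so
\[
x \;=\; \frac{a - 18}{a - 9}\,,
\qquad\text{e.g. } a = 30\%:\quad
x = \frac{30 - 18}{30 - 9} = \frac{12}{21} \approx 0.57 .
\]
% The NLP chooses shift-wise blends to keep the ash level near 18% while
% minimising its shift-to-shift variation under the CHP layout constraints.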

 

On an Inequality related to Variational Inequality and its Consequences

G. K. Panda

Department of Mathematics,

National Institute of Technology, Rourkela - 769 008, Orissa, India

B. B. Sahoo

Department of Mathematics, Cuttack College, Cuttack – 753 004, Orissa, India

and

G. K. Sahu

Department of Mathematics, Seemanta Engineering College, Jharpokharia

(Mayur Bhanj) - 757 086,  Orissa, India

 

Abstract

Let X be a Hausdorff topological vector space with dual X*, K a nonempty subset of X, and f : K × K → ℝ any map. In this paper we study the following problem: “Find x₀ ∈ K such that f(x₀, y) ≥ 0 for all y ∈ K”. Some results on this problem were studied by Takahashi, Park, and Kim and Yen. Behera and Panda recognized this problem as an independent problem and proved several results on the existence of its solution in the setting of a Banach space. This problem includes as special cases many problems on variational inequalities and generalized variational inequalities studied by many authors. In this paper we also introduce and study another problem: “Find x₀ ∈ K such that x₀ ∈ S(x₀) and f(x₀, y) ≥ 0 for all y ∈ S(x₀)”, where S : K → 2^K is any point-to-set map. This problem includes as a particular case the classical quasi-variational inequality problem. A generalization of Minty's Lemma is also studied.

 

A Variational Inequality in Semi-inner-Product Space

Sudarsan Nanda

Department of Mathematics

Indian Institute of Technology

Kharagpur, India

 

Abstract

The purpose of this paper is to give a brief review of known results on variational inequalities and to prove an existence theorem for a variational inequality in certain semi-inner-product spaces under some contractive-type conditions on the operator.

 

In fact, we prove the following theorem:

 

Theorem: Let X be uniformly convex and strongly smooth (so that X is a unique semi-inner-product space) and let K be a closed convex subset of X. Let T : K → X satisfy any one of two contractive-type conditions, each with an appropriate constant. Then there exists an x₀ ∈ K such that [Tx₀, y − x₀] ≥ 0 for all y ∈ K.

 

On the Lipschitz continuity of the solution map in Semidefinite Linear Complementarity Problems

R. Balaji     V. Vetrivel

Indian Institute of Technology Madras

Chennai

Abstract

In the setting of semidefinite linear complementarity problems, we consider the Lipschitz continuity of the solution map. It is shown that if the solution map is Lipschitz continuous, then the i-th diagonal entry of L(E_ii) is positive, where E_ii is the diagonal matrix with the i-th diagonal entry equal to 1 and all other entries 0. As an application, we construct linear maps having the P-property for which the solution map is not Lipschitz continuous.

 

Improving Efficiency of Winding in an Autoconer

A.Rajagopal

    Indian Statistical Institute

   Coimbatore

 

Abstract

Garment manufacturers judge the quality of yarn by good finish, which means fewer joints, and by higher productivity, common benefits to both spinners and garment manufacturers. The higher the number of joints, the more time is wasted and the poorer the quality and productivity. Operator cost is higher if the work assignment is reduced in order to reduce waiting time. On the other hand, if the work assignment is increased, waiting time increases and productivity is lower. Therefore one needs to assign work optimally to workers and improve the finish quality of the yarn.

In this paper we analyze and find that a unit increase in operator cost achieved by reducing the work assignment does not contribute to higher productivity, but only escalates cost. The increase in productivity is only 190 grams per frame per shift, whereas the improvement in yarn quality from reducing joints by 10% is 17 kg per frame per shift.

Further, when the process parameters are altered to achieve this, the actual improvement is 10 kg per frame per shift.

 

Techno-Economic Viability of a Sick Unit by Restructuring “NPA” and Rehabilitating through a Breakthrough Approach

A.Rajagopal

SQC & OR Unit

Indian Statistical Institute

Coimbatore

Abstract

The textile sector in India contributes economically, socially, technically and traditionally to foreign-exchange earnings, embracing the agricultural, industrial and trading sectors in India.

Many spinning units were protected by a sellers' market until a decade ago. However, under the present conditions of a buyers' market, “technological up-gradation” and quality consistency are vital factors forcing a change.

 

In this paper we consider the case of a sick unit and describe how quantitative techniques were used to evolve a turnaround strategy for reconstruction. As a result of this effort the sick unit was brought back to viability.

 

Performance Analysis of Cellular Networks

S. Dharmaraja

Department of Mathematics

Indian Institute of Technology Delhi

New Delhi 110016, India

Email: dharmar@maths.iitd.ac.in

 

Abstract

The rapid growth in the demand for mobile communications has led to an intense research effort to achieve efficient use of the scarce spectrum allocated for cellular communications. Two important Quality of Service (QoS) measures have been defined for cellular networks. The first is the new call blocking probability, a measure similar to the one seen in telephone trunk systems. The second is the handoff call dropping probability. Handoff dropping is extremely important in cellular systems since it leads to the undesired phenomenon of forced call termination. In this paper, recent work on closed-form solutions for these two performance measures in cellular networks is reported.

 

Keywords: Call blocking probability, Quality of Service,  Queuing models, Performance measures, Cellular networks
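For the first measure, the simplest fixed-channel model is the Erlang-B blocking formula familiar from telephone trunk systems; the sketch below uses the standard recursion with invented channel and traffic figures (handoff dropping requires a different model, e.g. guard channels).

# Erlang-B blocking probability for a cell with c channels and offered
# traffic a (in Erlangs), via the standard stable recursion:
#   B(0, a) = 1,  B(k, a) = a*B(k-1, a) / (k + a*B(k-1, a)).
def erlang_b(c: int, a: float) -> float:
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b

# Illustrative numbers: 30 channels, 25 Erlangs of offered traffic.
print(f"new-call blocking ~ {erlang_b(30, 25.0):.4f}")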

 

Computationally Efficient Winner Determination Techniques for Internet Multi-Unit Auctions

Sandeep Juneja,

Tata Institute of Fundamental Research

Dr. Homi Bhabha Rd
Mumbai 400 005

Abstract

In this talk we develop algorithms for efficiently processing bids in a single-item, N-unit (N ≥ 1) open-cry auction for a large class of winner determination rules. Existing techniques consider all previously submitted bids along with the new arrivals to update the current set of winners. We propose that at most N "potential winner bids" amongst the previously submitted bids need to be used for such updates, thus significantly reducing the computation time and memory requirements. This is crucial when a large number of auctions are being conducted simultaneously. For a commonly used greedy auction rule we show that the expected number of potential winner bids may be much less than N under reasonable probabilistic assumptions. (Joint work with Achal Bassamboo and Manish Gupta)
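The "at most N potential winner bids" observation can be illustrated with a min-heap that retains only the N highest bids seen so far; this sketch assumes a simple highest-bids-win rule, not the authors' full class of winner determination rules.

# Keep only the N highest bids seen so far in a min-heap: a new bid is
# compared against the smallest retained bid, so each arrival costs
# O(log N) time and memory stays at N regardless of total bid volume.
import heapq

def process_bids(bids, n_units):
    winners = []                      # min-heap of the current top-N bids
    for bid in bids:
        if len(winners) < n_units:
            heapq.heappush(winners, bid)
        elif bid > winners[0]:        # beats the lowest potential winner
            heapq.heapreplace(winners, bid)
        # else: the bid can never win and is discarded immediately
    return sorted(winners, reverse=True)

print(process_bids([12, 5, 30, 7, 22, 19, 3, 25], n_units=3))  # [30, 25, 22]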


Iterates of semidifferentiable monotone homogeneous maps and Markov decision problems

Marianne Akian

INRIA, Domaine de Voluceau, 78153 Le Chesnay Cedex, France,

e-mail: marianne.akian@inria.fr

S. Gaubert

 INRIA, Domaine de Voluceau, 78153 Le Chesnay Cedex, France,

e-mail: stephane.gaubert@inria.fr

Roger Nussbaum

Mathematics Department, Hill Center,

Rutgers University, 110 Frelinghuysen Road,

Piscataway, New Jersey, 08854-8019, U.S.A.,

e-mail: nussbaum@math.rutgers.edu

 

Abstract

In [AGN03], we consider the nonlinear eigenproblem for maps satisfying certain conditions of monotonicity and homogeneity. We prove in particular the following result. If C is a normal cone in a Banach space, and if f is a monotone, multiplicatively homogeneous self-map of the interior of C, then an eigenvector v of f is unique (up to a multiplicative constant), provided that f is semidifferentiable at v, that its semiderivative f_v has a unique eigenvector (up to the addition of a multiple of v), and that f_v satisfies a Fredholm-type condition. Moreover, every orbit of f converges to a scalar multiple of v at a geometric rate, provided that f_v has a certain nonlinear spectral radius strictly less than one. This extends earlier results proved by Nussbaum for differentiable maps [Nus88]. The nonlinear spectral radius that we use originates from the theory developed in [MPN02]. The semidifferentiability notion comes from variational analysis (see [RW98]).

 

We shall present here the applications of such results to optimal control problems, and more generally, to zero-sum repeated games. In this context, f is conjugate to a dynamic programming operator, and it is semidifferentiable under  rather general assumptions, which include the case where the state  and action spaces are finite.  We obtain as corollaries uniqueness results for the bias vector, as well as geometric convergence results for the value function, which extend and complete the results of [AG03] for ergodic optimal control problems.
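In symbols, the setting described above is (restated from the abstract, with C ordered by its cone order):

% Monotone, multiplicatively homogeneous self-map f of int C, C a normal
% cone in a Banach space; the nonlinear eigenproblem and the geometric
% convergence of orbits described above:
\[
x \le y \Rightarrow f(x) \le f(y), \qquad
f(\lambda x) = \lambda f(x)\ \ (\lambda > 0), \qquad
f(v) = \lambda v,\ \ v \in \operatorname{int} C,
\]
\[
f^k(x) \;\longrightarrow\; c(x)\, v \quad \text{at a geometric rate, when
the semiderivative } f_v \text{ has nonlinear spectral radius} < 1 .
\]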

 

References:

[AG03] M. Akian and S. Gaubert, Spectral theorem for convex monotone homogeneous maps, and ergodic control, Nonlinear Analysis: Theory, Methods & Applications 52 (2003), no. 2, 637-679.

[AGN03] M. Akian, S. Gaubert, and R. Nussbaum, Uniqueness of the fixed point of nonexpansive semidifferentiable maps, in preparation, 2003.

[MPN02] J. Mallet-Paret and R. Nussbaum, Eigenvalues for a class of homogeneous cone maps arising from max-plus operators, Discrete and Continuous Dynamical Systems 8 (2002), no. 3, 519-562.

[Nus88] R. D. Nussbaum, Hilbert's projective metric and iterated nonlinear maps, Memoirs of the AMS 75 (1988), no. 391.

[RW98] R. T. Rockafellar and R. J.-B. Wets, Variational Analysis, Springer, 1998.


Fast and Robust Learning Through Fuzzy Linear Proximal Support Vector Machines

Jayadeva, Reshma Khemchandani and Suresh Chandra

Department of Electrical Engineering and Department of Mathematics

Indian Institute of Technology, Hauz Khas, New Delhi – 110 016,  India

 

Abstract

Traditional Support Vector Machines assign data points to one of two classes, represented in the pattern space by two disjoint half-spaces.  In this paper, we propose a fuzzy extension to proximal SVMs, where a fuzzy membership is assigned to each pattern, and points are classified by assigning them to the nearest of two parallel planes that are kept as distant from each other as possible.  The algorithm is simple and fast, and can be used to obtain an improved classification when one has an estimate of the fuzziness of samples in either class.
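A compact sketch in the spirit of the proposal: a linear proximal SVM in which each sample's squared error is weighted by a fuzzy membership, solved as a single regularized linear system. The weighting scheme, data and trade-off parameter are assumptions; the authors' exact formulation may differ.

# Linear proximal SVM with fuzzy membership weights s_i: each class is
# fitted to one of two parallel planes x'w = gamma +/- 1, and the weighted
# regularized least-squares solution is a single linear solve.
import numpy as np

rng = np.random.default_rng(0)
A = np.vstack([rng.normal(+2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
d = np.r_[np.ones(50), -np.ones(50)]          # class labels +1 / -1
s = rng.uniform(0.5, 1.0, 100)                # fuzzy memberships (assumed)
nu = 10.0                                     # regularization trade-off

Z = np.c_[A, -np.ones(len(A))]                # variables u = [w; gamma]
S = np.diag(s)
# Minimize (1/(2 nu))||u||^2 + 0.5 * sum_i s_i (d_i z_i'u - 1)^2:
H = np.eye(Z.shape[1]) / nu + Z.T @ S @ Z
u = np.linalg.solve(H, Z.T @ (s * d))         # normal equations
w, gamma = u[:2], u[2]

pred = np.sign(A @ w - gamma)                 # nearest-plane classification
print("training accuracy:", (pred == d).mean())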

 

Multiparametric Sensitivity Analysis of the Constraint Matrix in Linear-plus-Linear Fractional Programming Problem

Sanjeet Singh, Scientist, SAG, DRDO, Metcalf House, Delhi-54

Pankaj Gupta, Deen Dayal Upadhyaya College, University of Delhi,New Delhi-15

 Davinder Bhatia, Dept. of OR, University of Delhi, Delhi-7

 

Abstract

In this paper, we study multiparametric sensitivity analysis for perturbations in a single row or column of the constraint matrix in programming problems with a linear-plus-linear fractional objective function. Such programming problems arise when a compromise between absolute and relative terms is to be maximized. This type of problem has major applications in areas such as optimizing enterprise capital, the production development fund, and transportation, to name a few. We construct critical regions for simultaneous and independent perturbations using the concept of maximum volume in the tolerance region. Necessary and sufficient conditions are given to classify the perturbation parameters as "focal" and "nonfocal". Nonfocal parameters can have unlimited variations; because of their low sensitivity, they can in practice be deleted from the final analysis. For the focal, more sensitive, parameters, a maximum-volume tolerance region is characterized. Theoretical results of the paper are illustrated with the help of a numerical example.

 

Duality in Multiobjective fractional programming involving ρ-convexity

Reddy, P.R.S. and Varalakshmi, G.

Sri Venkateswara University, Tirupati, A.P., India

E. mail: gvlsvu@yahoo.co.in

Abstract

In this paper, we derive necessary and sufficient optimality conditions for a multiobjective fractional programming problem with respect to ρ-generalized convexity. In addition, weak and strong duality results are proved in this context.

Key Words: Multiobjective fractional programming problem, ρ-generalized convexity, duality.


Multi-Machine Scheduling Problem for a General Class of Job Value Deterioration Functions

Sumit Raut1                       Sanjeev Swami2

1PhD Scholar, Department of Industrial and Management Engineering

Indian Institute of Technology, Kanpur- 208016, INDIA

E-mail: rsumit@iitk.ac.in, Phone No: 0512-2597101

2Assistant Professor, Department of Industrial and Management Engineering

Indian Institute of Technology, Kanpur- 208016, INDIA

E-mail: sswami@iitk.ac.in, Phone No: 0512-2597460, Fax: 0512-2597553

Abstract

 

In this paper, we study a class of machine scheduling problems in which the critical scheduling parameter, job value, is dependent on the starting time of the job in a schedule. The objective is to maximize the cumulative value of the jobs in a schedule. Voutsinas and Pappis (2002) have earlier considered the case of job value deteriorating exponentially over time for a single machine problem. In the proposed work, we consider more generalized job deterioration functions (e.g., linear, exponential, Weibull) for the following cases: (i) single machine, (ii) identical parallel machines and (iii) parallel machines with unequal capacity. Complexity boundary results are presented for each model. We also introduce some efficient heuristics for these models and investigate their properties. Numerical results are provided to compare the performance of various approaches proposed.

Key words: Scheduling; Deteriorating jobs; Heuristics; Combinatorial Optimization
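To make the setting concrete, the sketch below compares a simple ratio-based priority rule with brute force on a toy single-machine instance with exponentially deteriorating job values; the rule and the data are illustrative assumptions, not the heuristics proposed in the paper.

# Single machine, job value deteriorating exponentially with its start
# time: value_i(t) = v_i * exp(-theta_i * t).  Compare a simple ratio
# heuristic with brute force on a toy instance (rule is illustrative).
import itertools, math

jobs = [  # (processing time p, initial value v, decay rate theta)
    (2.0, 100.0, 0.10), (1.0, 60.0, 0.30), (3.0, 150.0, 0.05), (1.5, 80.0, 0.20),
]

def total_value(order):
    t, total = 0.0, 0.0
    for p, v, theta in order:
        total += v * math.exp(-theta * t)   # value at the job's start time
        t += p
    return total

heur = sorted(jobs, key=lambda j: -j[1] * j[2] / j[0])  # v*theta/p priority
best = max(itertools.permutations(jobs), key=total_value)
print("heuristic:", round(total_value(heur), 2),
      " optimal:", round(total_value(best), 2))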

 

Machine Shop Scheduling -- An Application

Surajit Pal, S. Mondal, and D. K. Manna

Indian Statistical Institute,

110 Nelson Manickam Road,

Aminjikarai, Chennai – 600029.

Email:  surajitpal@hotmail.com

 

 

Abstract

This article deals with the development of a scheduling system for the machine shop in a company that manufactures automotive equipment. The scheduling problem under consideration involves more than 130 jobs and 50 machines, and it has a limited planning horizon. The length of the planning cycle is normally one week, and the processing requirement varies from cycle to cycle. The system draws up a schedule after taking into consideration the existing finished/semi-finished stock and the availability of machines, with the objectives of meeting due-date demand as well as maximizing shop utilization. The development of the system is accomplished with the help of computer software that requires about one minute on a PIII processor to derive the weekly schedule.

Keywords: Early due-date principle, makespan minimization, shop utilization.


On the least cost testing sequence problem for quality inspection

K.V.S.Sarma & K.Sekhar

Department of Statistics

S.V University, Tirupati. 517502.

 

Abstract

The Least Cost Testing Sequence (LCTS) problem is one application of the well-known weighted Shortest Processing Time (SPT) sequencing problem. A manufactured item has to pass each of n tests at final inspection, where the i-th test has an associated inspection cost C_i and a non-zero probability p_i of passing that test. Testing is terminated if the item fails any one test. When there is no constraint on the technical ordering of the tests, it has been established that the optimal sequence minimizing expected inspection cost is obtained by sequencing the tests such that C[i]/(1-p[i]) ≤ C[i+1]/(1-p[i+1]) for all i. The new aspect studied in this paper is the provision for rework. We admit a non-zero probability r_i of rework for the i-th test and a maximum of m reworks allowed for the item. The probability of termination of the testing process at the end of the i-th test is the sum of the probabilities of a normal failure, failure after rework, and reaching the m-th rework. This probability becomes the weight W_i given to the i-th test. An interesting probability structure has been identified to derive W_i under the condition that all the probabilities are non-identical. An array called the Probability Position Matrix (PPM) has been developed, and a spreadsheet designed to work out the W_i and the sequence. This model has applications to project evaluation studies, final testing in satellite launching, etc.
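The C[i]/(1-p[i]) ordering rule is easy to check numerically; the sketch below covers the classical no-rework case only (the rework weights W_i and the PPM are the paper's contribution and are not reproduced here).

# Least cost testing sequence without rework: testing stops at the first
# failure, so E[cost] = C_1 + p_1*C_2 + p_1*p_2*C_3 + ...  The optimal
# order sorts tests by C_i / (1 - p_i); brute force confirms it here.
import itertools

tests = [(4.0, 0.90), (1.0, 0.60), (3.0, 0.95), (2.0, 0.80)]  # (C_i, p_i)

def expected_cost(seq):
    total, reach = 0.0, 1.0
    for c, p in seq:
        total += reach * c      # cost paid only if all earlier tests passed
        reach *= p
    return total

ordered = sorted(tests, key=lambda t: t[0] / (1.0 - t[1]))
brute = min(itertools.permutations(tests), key=expected_cost)
print("rule:", round(expected_cost(ordered), 4),
      " brute force:", round(expected_cost(brute), 4))   # values coincide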