Student paper – operations management – calculations

Assignment 1

Q1 (a)

Based on the data from Maryland Hospital, we can construct the following x̄ chart and R chart.

Table 1.0 Patient waiting time data in Maryland Hospital (x̄ and R charts)

Sample (k)   Waiting times (min)              x̄        R
             1    2    3    4    5
1            27   18   20   23   19          21.40     9.00
2            22   25   31   40   17          27.00    23.00
3            16   15   22   19   23          19.00     8.00
4            35   27   16   20   24          24.40    19.00
5            21   33   45   12   22          26.60    33.00
6            17   15   22   20   30          20.80    15.00
7            25   21   26   33   19          24.80    14.00
8            15   38   23   25   31          26.40    23.00
9            31   26   24   35   32          29.60    11.00
10           28   23   29   20   27          25.40     9.00
Total                                        245.40   164.00

And based on the above data, we can get the following information:

k = 10

x̿ = Σx̄ / k = 245.40 / 10 = 24.54 min

R̄ = ΣR / k = 164.00 / 10 = 16.40 min



Hence we can calculate the control limits for the x̄ chart:

UCL = x̿ + A2 × R̄

= 24.54 + 0.58 × 16.4

= 34.052

LCL = x̿ − A2 × R̄

= 24.54 − 0.58 × 16.4

= 15.028

With the upper and lower control limits of the x̄ chart calculated, none of the sample means falls outside either limit; that is to say, the process at Maryland Hospital is in control, as shown in Chart 1.0.

Chart 1.0 x̄ chart for Maryland Hospital

According to the formulas for the R chart from Russell and Taylor (2009), we can calculate the following.

R̄ = ΣR / k = 164.00 / 10 = 16.4

UCL = D4 × R̄ = 2.11 × 16.4 = 34.604

LCL = D3 × R̄ = 0 × 16.4 = 0

From Chart 2.0 below, we can see that the process at Maryland Hospital is in control: none of the sample ranges falls outside the control limits.

Chart 2.0 R chart for Maryland Hospital (range plotted against sample number)

By and large, the process at Maryland Hospital is in control on the R chart, and it is likewise in control on the x̄ chart. Hence, based on the results from both the R chart and the x̄ chart, we may conclude that the process at Maryland Hospital appears to be in control.
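The following minimal Python sketch (not part of the original assignment) reproduces the x̄-chart and R-chart limit calculations above, using the sample data from Table 1.0 and the control-chart factors quoted in the text (A2 = 0.58, D3 = 0, D4 = 2.11).

```python
# Sketch of the x-bar and R chart calculations for the Maryland Hospital samples.
samples = [
    [27, 18, 20, 23, 19], [22, 25, 31, 40, 17], [16, 15, 22, 19, 23],
    [35, 27, 16, 20, 24], [21, 33, 45, 12, 22], [17, 15, 22, 20, 30],
    [25, 21, 26, 33, 19], [15, 38, 23, 25, 31], [31, 26, 24, 35, 32],
    [28, 23, 29, 20, 27],
]

A2, D3, D4 = 0.58, 0.0, 2.11            # factors for n = 5, as quoted in the text

means = [sum(s) / len(s) for s in samples]
ranges = [max(s) - min(s) for s in samples]

x_double_bar = sum(means) / len(means)   # 24.54
r_bar = sum(ranges) / len(ranges)        # 16.40

ucl_x = x_double_bar + A2 * r_bar        # 34.052
lcl_x = x_double_bar - A2 * r_bar        # 15.028
ucl_r = D4 * r_bar                       # 34.604
lcl_r = D3 * r_bar                       # 0.0

print(f"x-bar chart: UCL = {ucl_x:.3f}, LCL = {lcl_x:.3f}")
print(f"R chart:     UCL = {ucl_r:.3f}, LCL = {lcl_r:.3f}")
# All sample means and ranges fall inside these limits, so the process is in control.
```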

(b)

As the requirement on the waiting time of the emergency room is 25 minutes plus or minus 5 minutes (a specification of 20 to 30 minutes), and our analysis above has already determined the process mean (24.54 min) and average range (16.4 min), we can apply the process capability measures to obtain the process capability ratio (Cp) and the process capability index (Cpk) as follows.

Cp = tolerance range / process range

= (upper specification limit − lower specification limit) / 6σ

= (30 − 20) / 6σ

= 0.53

Cpk = minimum [(x̿ − lower specification) / 3σ, (upper specification − x̿) / 3σ]

= minimum [(24.54 − 20) / 3σ, (30 − 24.54) / 3σ]

= 1

Based on the results for Cp and Cpk, because Cp is less than 1, the process range of the emergency room is greater than the tolerance range, which indicates it may be impossible for the emergency room to meet the requirement with its current process. And although Cpk equals 1, which suggests the emergency room may have some chance of meeting the requirement, combining the two results we advise Maryland Hospital to take measures to manage this situation and to improve the capability of its emergency room to meet the current requirements.
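The paper does not show the process standard deviation used to obtain Cp = 0.53 and Cpk = 1, so the sketch below only parameterises the calculation; `sigma` is a placeholder to be replaced with the estimate from the case data.

```python
# Parameterised sketch of the Cp / Cpk calculation (sigma is a placeholder).
def capability(usl, lsl, mean, sigma):
    cp = (usl - lsl) / (6 * sigma)                 # process capability ratio
    cpk = min((mean - lsl) / (3 * sigma),
              (usl - mean) / (3 * sigma))          # process capability index
    return cp, cpk

usl, lsl = 30.0, 20.0    # 25 min +/- 5 min specification
mean = 24.54             # grand mean from part (a)
sigma = 3.0              # placeholder; substitute the actual estimate of sigma

cp, cpk = capability(usl, lsl, mean, sigma)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```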

Q2 (a)

i)

Based on the information about Quick Bucks Ltd, there are three alternative sound systems for it to choose from, whose reliabilities are calculated below.

1) Reliability of the Basic system (five components in series, each with reliability 0.8)

Reliability = 0.8 × 0.8 × 0.8 × 0.8 × 0.8

= 0.32768

 

2) Reliability of the Standard system (five components in series, each with reliability 0.9)

Reliability = 0.9 × 0.9 × 0.9 × 0.9 × 0.9

= 0.59049

 

3) Reliability of the Professional system (five components in series, each with reliability 0.99)

Reliability = 0.99 × 0.99 × 0.99 × 0.99 × 0.99

= 0.95099

 

Based on the above results, the Professional system has the highest reliability, which indicates it is the most likely to keep working under normal conditions, so it is the one we recommend Quick Bucks Ltd choose.
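As a quick check, the short Python sketch below (not part of the original assignment) computes the series reliability of each system, assuming five components in series as above.

```python
# Series reliability: all five components must work.
systems = {"Basic": 0.80, "Standard": 0.90, "Professional": 0.99}

for name, r in systems.items():
    reliability = r ** 5
    print(f"{name:12s}: {reliability:.5f}")
# Basic 0.32768, Standard 0.59049, Professional 0.95099
```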

ii) As each system can be purchased in a "plus" configuration, in which every component has an identical backup, we can calculate the new reliability of each system as follows.

Reliability of the Basic system = [R1 + (1 − R1) × R2]^5

= [0.8 + (1 − 0.8) × 0.8]^5

= 0.8154

Reliability of the Standard system = [R1 + (1 − R1) × R2]^5

= [0.9 + (1 − 0.9) × 0.9]^5

= 0.95099

Reliability of the Professional system = [R1 + (1 − R1) × R2]^5

= [0.99 + (1 − 0.99) × 0.99]^5

= 0.9995

Total cost (Basic system) = $2,000 + (1 − 0.8154) × $50,000 = $11,230

Total cost (Standard system) = $4,000 + (1 − 0.95099) × $50,000 = $6,450.5

Total cost (Professional system) = $10,000 + (1 − 0.9995) × $50,000 = $10,025

Based on the above results, the Standard system has the second-highest reliability but the lowest expected total cost, so we recommend purchasing the Standard system in this situation.
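The following Python sketch (not part of the original assignment) reproduces the "plus" configuration calculation, using the purchase prices and the $50,000 failure cost from the figures above.

```python
# Each of the five stages works if either the primary or its identical backup works.
systems = {"Basic": (0.80, 2_000), "Standard": (0.90, 4_000), "Professional": (0.99, 10_000)}
FAILURE_COST = 50_000

for name, (r, price) in systems.items():
    stage = r + (1 - r) * r                 # primary works, or backup takes over
    reliability = stage ** 5
    expected_cost = price + (1 - reliability) * FAILURE_COST
    print(f"{name:12s}: reliability = {reliability:.5f}, expected cost = ${expected_cost:,.1f}")
# Approximately $11,231 (the paper rounds to $11,230), $6,450.5 and $10,025.
```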

b)

Based on the information from this case, we have:

λ = 40 customers per hour

μ1 (current process) = 1.2 min/unit = 50 units/hour

μ2 (with extra employees) = 0.9 min/unit = 66.67 units/hour

According to the formula L = λ / (μ − λ), we can calculate the following:

L1 = 40 / (50 − 40) = 4, hence the total cost for the current process = 4 × $31 = $124

L2 = 40 / (66.67 − 40) = 1.5

Hence the total cost with the extra employees = 1.5 × $52 = $78

As adding the extra employees costs less, we suggest the company choose to add the extra employees.
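A short Python sketch of this comparison (not part of the original assignment), using L = λ/(μ − λ) and the hourly cost figures applied in the calculation above:

```python
# M/M/1 comparison: average number in system times the hourly cost per unit.
lam = 40.0                                 # arrivals per hour

options = {
    "Current process": (50.0, 31.0),       # mu (units/hour), cost per unit per hour
    "Extra employees": (66.67, 52.0),
}

for name, (mu, cost) in options.items():
    L = lam / (mu - lam)                   # average number of units in the system
    print(f"{name}: L = {L:.2f}, hourly cost = ${L * cost:.2f}")
# Current process: L = 4.00, about $124; extra employees: L ~ 1.50, about $78.
```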

Q3

  1. Suppose the production volume for the products mentioned in this case is x; then we can set up the following total cost equations:

Total cost (labor-intensive process) = $10,000 + $14x

Total cost (more automated process) = $50,000 + $8x

Total cost (fully automated process) = $300,000 + $2x

And then we can get:

1) $10,000 + $14 x = $50,000+ $8x

x1= 6667

2) $10,000 + $14 x = $300,000 + $2x

x2= 24167

3) $50,000+ $8x = $300,000 + $2x

x3= 41667

Hence, we can get the total costs based on the value of x as follows.

1) When x1 = 6667

Labor-intensive: 10,000 + 14 × 6667 = $103,338

More automated: 50,000 + 8 × 6667 = $103,336

Fully automated: 300,000 + 2 × 6667 = $313,334

2) When x2 = 24167

Labor-intensive: 10,000 + 14 × 24167 = $348,338

More automated: 50,000 + 8 × 24167 = $243,336

Fully automated: 300,000 + 2 × 24167 = $348,334

3) When x3 = 41667

Labor-intensive: 10,000 + 14 × 41667 = $593,338

More automated: 50,000 + 8 × 41667 = $383,336

Fully automated: 300,000 + 2 × 41667 = $383,334

Based on the above data, we can then plot the total costs over the range of x.

From the resulting graph, we can see that:

1) When the production volume is below 6667 items, the total cost of the labor-intensive process is the lowest, so we should choose it.

2) When the production volume is between 6667 and 41667 items, the total cost of the more automated process is the lowest, so we should choose it.

3) When the production volume is above 41667 items, the total cost of the fully automated process is the lowest, so we should choose it.
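The following Python sketch (not part of the original assignment) recomputes the crossover volumes and checks which process is cheapest at a few sample volumes, using the fixed and variable costs given above.

```python
# Break-even (crossover) analysis for the three processes.
processes = {
    "Labor-intensive": (10_000, 14),
    "More automated":  (50_000, 8),
    "Fully automated": (300_000, 2),
}

def crossover(p1, p2):
    """Volume x at which the two cost functions F + v*x are equal."""
    (f1, v1), (f2, v2) = processes[p1], processes[p2]
    return (f2 - f1) / (v1 - v2)

print(crossover("Labor-intensive", "More automated"))    # ~6,667
print(crossover("Labor-intensive", "Fully automated"))   # ~24,167
print(crossover("More automated", "Fully automated"))    # ~41,667

def cheapest(x):
    return min(processes, key=lambda p: processes[p][0] + processes[p][1] * x)

for volume in (5_000, 20_000, 60_000):
    print(volume, cheapest(volume))
# Below ~6,667 the labor-intensive process is cheapest, between ~6,667 and
# ~41,667 the more automated process is cheapest, and above ~41,667 the fully
# automated process is cheapest.
```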

b)

i) Activity data and cost of crashing

Activity   Predecessor   Normal time   Crash time   Normal    Crash     Max crash   Crash cost
                         (weeks)       (weeks)      cost ($)  cost ($)  (weeks)     per week ($)
1          –             20            8            1,000     1,480     12          40
2          –             24            20           1,200     1,400     4           50
3          –             14            7            700       1,190     7           70
4          1             10            6            500       820       4           80
5          3             11            5            550       730       6           30

Crash cost per week = ($ Crash − $ Normal) ÷ (Normal time − Crash time)

ii) Based on data in this case, we can get the crash time for each path.

  1. crash time for path 1-4 = (20 − 8) + (10 − 6) = 16 weeks
  2. crash time for path 2 = 24 − 20 = 4 weeks
  3. crash time for path 3-5 = (14 − 7) + (11 − 5) = 13 weeks

Hence the maximum possible crash time for the network is 33 weeks

And the maximum crashing cost for the network:

Activity 1: crash 12 weeks at $40 per week = $480

Activity 4: crash 4 weeks at $80 per week = $320

Activity 2: crash 4 weeks at $50 per week = $200

Activity 3: crash 7 weeks at $70 per week = $490

Activity 5: crash 6 weeks at $30 per week = $180

Maximum crashing cost = $1,670

iii) Normal project cost = 1000+ 1200 + 700+ 500 + 550 = $ 3950

Crash project cost = 3950 + (480 + 320) + (490 + 180) + 200 = $5,620
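A short Python sketch (not part of the original assignment) that reproduces the crash-cost arithmetic from the activity table above:

```python
# Maximum crash weeks, crash cost per week, and total crashing cost.
activities = {
    # activity: (normal_weeks, crash_weeks, normal_cost, crash_cost)
    1: (20, 8, 1_000, 1_480),
    2: (24, 20, 1_200, 1_400),
    3: (14, 7, 700, 1_190),
    4: (10, 6, 500, 820),
    5: (11, 5, 550, 730),
}

total_crash_cost = 0
for act, (nt, ct, nc, cc) in activities.items():
    max_crash = nt - ct
    per_week = (cc - nc) / max_crash        # ($Crash - $Normal) / (Normal - Crash)
    total_crash_cost += max_crash * per_week
    print(f"Activity {act}: crash up to {max_crash} weeks at ${per_week:.0f}/week")

normal_cost = sum(nc for _, _, nc, _ in activities.values())
print(f"Total crashing cost = ${total_crash_cost:,.0f}")                # $1,670
print(f"Crash project cost  = ${normal_cost + total_crash_cost:,.0f}")  # $5,620
```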

Q4 a)

(Solver spreadsheet solution table for Q4 a)

The above table is the solution for Sendai Global Food LLC to determine the optimal shipments with the minimum total transportation cost. For the transportation problem faced by this company, there are both supply constraints and demand constraints to manage. At the same time, the mathematical relationships needed to ensure the plan is feasible must be taken into full consideration, namely that the amount shipped into each Japanese warehouse should equal the amount shipped out of it, as we justify below (Russell & Taylor 2009).

I) The decision variables are the cells B3:D5 and C10:E12 of the above table.

II) Supply constraints and demand constraints are as follows.

Supply:

  1. the constraint for Hamburg shipping products to the three Japanese warehouses is F3 = SUM(B3:D3)
  2. the constraint for Marseilles shipping products to the three Japanese warehouses is F4 = SUM(B4:D4)
  3. the constraint for Liverpool shipping products to the three Japanese warehouses is F5 = SUM(B5:D5)
  4. the constraint for Adachi receiving products from the three European locations is B6 = SUM(B3:B5)
  5. the constraint for Otawa receiving products from the three European locations is C6 = SUM(C3:C5)
  6. the constraint for Edogawa receiving products from the three European locations is D6 = SUM(D3:D5)

Demand:

  1. the constraint for Odachi shipping products to the three distribution centers is F10 = SUM(C10:E10)
  2. the constraint for Otawa shipping products to the three distribution centers is F11 = SUM(C11:E11)
  3. the constraint for Edogawa shipping products to the three distribution centers is F12 = SUM(C12:E12)
  4. the constraint for Himeji receiving products from the three warehouses in Japan is C14 = SUM(C10:C12)
  5. the constraint for Matsudo receiving products from the three warehouses in Japan is D14 = SUM(D10:D12)
  6. the constraint for Adach receiving products from the three warehouses in Japan is E14 = SUM(E10:E12)

And from the above mathematical relationships we find that the amount shipped out of the European locations equals the amount shipped into the three distribution centers; both equal 120,000 kg. That is to say, this solution is a feasible way for Sendai Global Food LLC to determine the optimal shipments of about 120,000 kg. Meanwhile, we can also obtain the total transportation cost, computed in cell B21 = SUMPRODUCT(B3:D5, I3:K5) + SUMPRODUCT(C10:E12, J10:L12) = $58,019.
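The cost, supply and demand figures live in the Solver spreadsheet and are not reproduced in the text, so the arrays in the Python sketch below are placeholders only; the sketch merely mirrors the structure of the model described above (shipments from three European ports to three Japanese warehouses and on to three distribution centres, with supply, demand and warehouse flow-balance constraints), using scipy.optimize.linprog in place of Excel Solver.

```python
# Sketch of a two-stage (transshipment) transportation LP with placeholder data.
import numpy as np
from scipy.optimize import linprog

cost_stage1 = np.ones((3, 3))    # placeholder port -> warehouse costs (cells I3:K5)
cost_stage2 = np.ones((3, 3))    # placeholder warehouse -> centre costs (cells J10:L12)
supply = np.array([40_000, 40_000, 40_000])   # placeholder supplies (120,000 kg in total)
demand = np.array([40_000, 40_000, 40_000])   # placeholder demands (120,000 kg in total)

c = np.concatenate([cost_stage1.ravel(), cost_stage2.ravel()])  # 18 decision variables

A_eq, b_eq = [], []
for i in range(3):                        # each port ships out exactly its supply
    row = np.zeros(18); row[3 * i:3 * i + 3] = 1
    A_eq.append(row); b_eq.append(supply[i])
for j in range(3):                        # flow balance at each warehouse: in = out
    row = np.zeros(18)
    row[j:9:3] = 1                        # shipments into warehouse j
    row[9 + 3 * j:9 + 3 * j + 3] = -1     # shipments out of warehouse j
    A_eq.append(row); b_eq.append(0)
for k in range(3):                        # each centre receives exactly its demand
    row = np.zeros(18); row[9 + k:18:3] = 1
    A_eq.append(row); b_eq.append(demand[k])

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
print(res.fun)    # minimised total transportation cost, analogous to cell B21
```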

b)

i) Using the method of linear regression, we can compute the following from the information provided by Magic Carpet.

Monthly carpet sales, y (1,000s yd)   Monthly construction permits, x   xy     y^2    x^2
5                                     17                                85     25     289
12                                    30                                360    144    900
6                                     12                                72     36     144
5                                     14                                70     25     196
8                                     18                                144    64     324
4                                     10                                40     16     100
14                                    38                                532    196    1444
9                                     20                                180    81     400
9                                     16                                144    81     256
16                                    31                                496    256    961
Σy = 88                               Σx = 206                          2123   924    5014

And then we can calculate

x̄ = Σx / n = 206 / 10 = 20.6

ȳ = Σy / n = 88 / 10 = 8.8

b = (Σxy − n x̄ ȳ) / (Σx^2 − n x̄^2)

= (2123 − 10 × 20.6 × 8.8) / (5014 − 10 × 20.6^2)

= 0.4026

And then we can get

a = ȳ − b x̄

= 8.8 − 0.4026 × 20.6

= 0.5064

As y = a + bx, when 25 construction permits for new homes are filed, we can finally get the forecast:

y = 0.5064 + 0.4026 × 25

= 10.57

By and large, when the number of monthly construction permits is 25, the monthly forecast for carpet sales is about 10.57 (in 1,000s of yards).

ii) Correlation

Based on the above results, we can calculate the correlation between monthly sales and new home construction as follows:

r = (nΣxy − ΣxΣy) / √[(nΣx^2 − (Σx)^2)(nΣy^2 − (Σy)^2)]

= (10 × 2123 − 206 × 88) / √[(10 × 5014 − 206^2)(10 × 924 − 88^2)]

= 0.913

As the value of r is near 1.00, we can conclude that there is a strong linear relationship between monthly sales and new home construction.
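The following Python sketch (not part of the original assignment) recomputes the least-squares line, the forecast at 25 permits, and the correlation coefficient from the Magic Carpet data above.

```python
# Least-squares regression and correlation for the Magic Carpet data.
from math import sqrt

x = [17, 30, 12, 14, 18, 10, 38, 20, 16, 31]   # monthly construction permits
y = [5, 12, 6, 5, 8, 4, 14, 9, 9, 16]          # monthly carpet sales (1,000s yd)
n = len(x)

sum_x, sum_y = sum(x), sum(y)
sum_xy = sum(xi * yi for xi, yi in zip(x, y))
sum_x2 = sum(xi ** 2 for xi in x)
sum_y2 = sum(yi ** 2 for yi in y)

x_bar, y_bar = sum_x / n, sum_y / n                              # 20.6, 8.8
b = (sum_xy - n * x_bar * y_bar) / (sum_x2 - n * x_bar ** 2)     # ~0.4026
a = y_bar - b * x_bar                                            # ~0.51

forecast = a + b * 25                                            # ~10.57
r = (n * sum_xy - sum_x * sum_y) / sqrt(
    (n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2))       # ~0.91

print(f"y = {a:.4f} + {b:.4f}x, forecast at x = 25: {forecast:.2f}, r = {r:.3f}")
```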

Q5

a) Based on the information from New World Motel, we have the following data:

Co = cost of overestimating no-shows = $100

Cu = cost of underestimating no-shows = $50

And P(N < X) ≤ Cu / (Cu + Co) = 50 / (50 + 100) = 0.33

And then we can construct the following table of no-show data:

No-shows (X)   Frequency   Probability   P(N < X)
0              18          0.2           0.0
1              36          0.4           0.2   ← critical ratio 0.33 falls here
2              27          0.3           0.6
3              9           0.1           0.9
Total          90          1.0

Because the critical ratio of 0.33 falls between the cumulative probabilities 0.2 and 0.6, we choose the largest number of no-shows whose cumulative probability does not exceed 0.33, which corresponds to P(N < X) = 0.2, i.e. X = 1. This indicates New World Motel had better overbook by 1 room.
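A short Python sketch of this decision rule (not part of the original assignment): choose the largest number of overbooked rooms X whose cumulative no-show probability P(N < X) does not exceed Cu / (Cu + Co).

```python
# Overbooking decision from the no-show frequency data above.
Co, Cu = 100.0, 50.0                        # overestimating / underestimating cost
critical_ratio = Cu / (Cu + Co)             # ~0.33

frequencies = {0: 18, 1: 36, 2: 27, 3: 9}   # no-shows observed over 90 nights
total = sum(frequencies.values())

cumulative, best_x = 0.0, 0
for x in sorted(frequencies):
    if cumulative <= critical_ratio:        # cumulative here is P(N < x)
        best_x = x
    cumulative += frequencies[x] / total

print(f"critical ratio = {critical_ratio:.2f}, overbook {best_x} room(s)")   # 1 room
```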

b) Based on the information and data from the Tan family, we can set out the problem as follows.


  1. Assume the amounts (in ounces) of each ingredient used in the chow-chow relish and the tomato relish are as in the following table:

Ingredient   Chow-chow relish   Tomato relish
Cabbage      Xc1                Xc2
Tomato       Yt1                Yt2
Onion        Zo1                Zo2

And then the profit for producing the two types of relish is:

Profit = $2.26/16 × (Xc1 + Yt1 + Zo1) + $1.95/16 × (Xc2 + Yt2 + Zo2)

  2. There are several constraints from the case that we should obey, as follows:

For the ingredients to produce relish of Chow- chow

Cabbage: ≥ 60%

Tomato: ≥ 10%

Onion: ≥ 5%

For the ingredients to produce relish of tomato

Cabbage: ≥ 10%

Tomato: ≥50%

Onion: ≥ 5%

And as the onion content in the two types of relish can be no more than 10%, we also require: onion ≤ 10%

  3. Based on the above data and constraints, we can finally determine the optimal amounts of these ingredients, as shown in the table above.

Hence we can get the profit

= $2.26/16 × (4608 + 2688 + 384) + $1.95/16 × (192 + 1632 + 96)

= $1,318.80

And the total demand

= (4608+ 2688+ 384) – 1.3 × (192+1632 + 96)

= 5184

  4. And the numbers of jars needed to produce the two types of relish to maximize the profit are:

Jars of chow-chow relish = 7680 / 16 = 480

Jars of tomato relish = 1920 / 16 = 120
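As a quick arithmetic check (not part of the original assignment), the Python sketch below recomputes the profit, the demand expression and the jar counts from the ounce quantities quoted above.

```python
# Check of the relish profit and jar arithmetic (16-oz jars).
chow_oz = 4608 + 2688 + 384          # 7,680 oz of chow-chow relish
tomato_oz = 192 + 1632 + 96          # 1,920 oz of tomato relish

profit = 2.26 / 16 * chow_oz + 1.95 / 16 * tomato_oz
demand_expr = chow_oz - 1.3 * tomato_oz

print(f"profit = ${profit:,.2f}")                   # $1,318.80
print(f"demand expression = {demand_expr:,.0f}")    # 5,184
print(f"jars: chow-chow {chow_oz // 16}, tomato {tomato_oz // 16}")   # 480 and 120
```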

Q6

a) Based on the data in this case, we can construct the following sequence chart and bar chart.

Chart 1.0 shows the best sequence for this company to complete the customers' orders as quickly as possible.

Chart 1.0 Sequence: C – D – B – A – E

 

Based on Johnson's rule (Russell & Taylor 2009), we find that jobs C and E have the smallest processing time, 1: C on process 1 and E on process 2. Hence we put C at the beginning of the sequence and E at the end. Next, jobs A and D have the second smallest processing time, 2, on process 2 and process 1 respectively, so we place A near the end of the sequence and D near the beginning, as shown in Chart 1.0. The only remaining job, B, then takes the only available slot. According to Johnson's rule (Russell & Taylor 2009), this sequence finishes these jobs faster than any other sequence.
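The following Python sketch (not part of the original assignment) applies Johnson's rule to the five jobs. The cutting and sewing times used here are read off the bar chart (Chart 2.0) rather than stated explicitly in the text, so treat them as inferred values.

```python
# Johnson's rule for a two-machine flow shop (times inferred from Chart 2.0).
times = {"A": (4, 2), "B": (6, 3), "C": (1, 3), "D": (2, 4), "E": (3, 1)}  # (cutting, sewing)

front, back = [], []
remaining = dict(times)
while remaining:
    # pick the job with the smallest remaining time on either process
    job = min(remaining, key=lambda j: min(remaining[j]))
    p1, p2 = remaining.pop(job)
    if p1 <= p2:
        front.append(job)        # smallest time on process 1: schedule as early as possible
    else:
        back.insert(0, job)      # smallest time on process 2: schedule as late as possible

sequence = front + back
print("Sequence:", "".join(sequence))      # CDBAE

# Makespan for the resulting sequence
t1 = t2 = 0
for job in sequence:
    p1, p2 = times[job]
    t1 += p1                     # cutting finishes
    t2 = max(t2, t1) + p2        # sewing starts after cutting and the previous sewing job
print("All jobs finished at time", t2)     # 17
```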

From Chart 2.0, we can read off the start and finish times of each job on the two processes.

Chart 2.0 The bar chart for determining the finishing time

Process 1 (Cutting): C from 0 to 1, D from 1 to 3, B from 3 to 9, A from 9 to 13, and E from 13 to 16.

Process 2 (Sewing): C from 1 to 4, D from 4 to 8, B from 9 to 12, A from 13 to 15, and E from 16 to 17; the gaps (0–1, 8–9, 12–13 and 15–16) are idle time while sewing waits for cutting.
FCFS

Sequence   Start time   Processing time   Completion time   Due date   Tardiness
1          2            5                 7                 20         0
2          7            10                17                33         0
3          17           4                 21                25         0
4          21           21                42                45         0
5          42           14                56                32         24
Total                                     143                          24
Average                                   28.6                         4.8

In Chart 2.0, the bar chart clearly shows process one and process two. In process two, a job cannot begin while the corresponding job in process one is still being processed, which appears in process two as idle time slots during those periods; during the idle time, process two has to wait until the job on process one is finished. Hence, we can finally see that all of the jobs in this question will be completed at time 17.

b)

I) FCFS:

As the starting time is June 2, the mean flow time under FCFS is 26.6.

II) DDATE

DDATE

Sequence   Start time   Processing time   Completion time   Due date   Tardiness
1          2            5                 7                 20         0
3          7            4                 11                25         0
5          11           14                25                32         0
2          25           10                35                33         2
4          35           21                56                45         11
Total                                     134                          13
Average                                   26.8                         2.6

 

As the starting time is June 2, the mean flow time under DDATE is 24.8.

III) Slack

Based on this case, the slack for each job is calculated as follows:

Job 1: (20-2)-5 = 13

Job 2: (33-2)-10 = 21

Job 3: (25-2)-4 = 19

Job 4: (45-2)-21 =22

Job 5: (32-2)-14 = 16

Slack

Sequence   Start time   Processing time   Completion time   Due date   Tardiness   Slack
1          2            5                 7                 20         0           13
5          7            14                21                32         0           16
3          21           4                 25                25         0           19
2          25           10                35                33         2           21
4          35           21                56                45         11          22
Total                                     144                          13
Average                                   28.8                         2.6

 

As the starting time is June 2, the mean flow time under the Slack rule is 26.8.

IV) SPT

SPT

Sequence   Start time   Processing time   Completion time   Due date   Tardiness
3          2            4                 6                 25         0
1          6            5                 11                20         0
2          11           10                21                33         0
5          21           14                35                32         3
4          35           21                56                45         11
Total                                     129                          14
Average                                   25.8                         2.8

 

As the starting time is June 2, the mean flow time under SPT is 23.8.

V) Summary

Rule     Mean flow time   Average tardiness   No. of jobs tardy   Maximum tardiness
FCFS     26.6             4.8                 1                   24
DDATE    24.8             2.6                 2                   11
SLACK    26.8             2.6                 2                   11
SPT      23.8             2.8                 2                   11
Best     23.8             2.6                 1                   11

Based on these results, we find that the FCFS rule produces the fewest tardy jobs, but its maximum tardiness is 24 days, which is very large compared with the lowest level of 11 days, so this rule is not the best choice. Comparing the remaining rules, DDATE may be the best choice for Mary: although it has 2 tardy jobs, it gives the lowest average tardiness and maximum tardiness together with a mean flow time close to the best. Hence, the DDATE rule is the best choice for Mary to prioritize her work.
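As a check on the four tables above, the following Python sketch (not part of the original assignment) recomputes the mean flow time, average tardiness, number of tardy jobs and maximum tardiness for each rule, using the processing times, due dates and June 2 start date from the case.

```python
# Comparison of the FCFS, DDATE, SLACK and SPT sequencing rules.
jobs = {
    # job: (processing_time, due_date)
    1: (5, 20), 2: (10, 33), 3: (4, 25), 4: (21, 45), 5: (14, 32),
}
START = 2                          # work begins on June 2

rules = {
    "FCFS":  sorted(jobs),                                    # order received
    "DDATE": sorted(jobs, key=lambda j: jobs[j][1]),          # earliest due date
    "SLACK": sorted(jobs, key=lambda j: (jobs[j][1] - START) - jobs[j][0]),
    "SPT":   sorted(jobs, key=lambda j: jobs[j][0]),          # shortest processing time
}

for rule, seq in rules.items():
    t, flow, tardies = START, [], []
    for j in seq:
        p, due = jobs[j]
        t += p                     # completion time of job j
        flow.append(t - START)     # flow time measured from June 2
        tardies.append(max(0, t - due))
    n = len(seq)
    print(f"{rule:5s}: mean flow time = {sum(flow)/n:.1f}, "
          f"mean tardiness = {sum(tardies)/n:.1f}, "
          f"jobs tardy = {sum(x > 0 for x in tardies)}, "
          f"max tardiness = {max(tardies)}")
```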

 

Reference

Russell, R. S. & Taylor, B. W. 2009, Operations management: Along the supply chain, John Wiley & Sons, New Jersey.
