Simulation Modeling



Question one



Range is the difference between the maximum and minimum values in a set of numbers. To calculate the range, we subtract the lowest value from the highest value. For example, from the data we have, the range is (146 − 4) = 142.

The range shows how widely the values in a given set vary from each other. Adding and dropping of courses should therefore be done within acceptable ranges.

Mid-spread values and the interquartile range eliminate outliers (extreme values that appear inconsistent with the rest of the data set). The interquartile range is obtained by subtracting the lower quartile from the upper quartile. The main objective of calculating the range is to establish how the values vary from each other.
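
As an illustration, the short Python sketch below computes the range and the interquartile range. The sample values are hypothetical stand-ins for the add/drop waiting-time data, and the quartile convention used by statistics.quantiles is an assumption.

import statistics

# Hypothetical waiting-time data; substitute the actual values.
data = [4, 12, 27, 35, 48, 61, 79, 102, 133, 146]

data_range = max(data) - min(data)          # e.g. 146 - 4 = 142

# statistics.quantiles splits the data into four equal parts (quartiles).
q1, q2, q3 = statistics.quantiles(data, n=4)
iqr = q3 - q1                               # mid-spread, ignores extreme values

print(f"Range = {data_range}")
print(f"Q1 = {q1}, Q3 = {q3}, IQR = {iqr}")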


Question two


Step 1 Randomly generate the first ship's unloading time unload and the time between arrivals between, and set the arrival time: arrive = between

Step  2 Initialize every output value

HARTIME = unload,    MAXHAR = unload,    WAITIME = 0,    MAXWAIT = 0,    IDLETIME = 0

Step 3 Compute the finish time for emptying the ship

Finish = arrival + unload

Step 4  For i = 2, 3, …, n:

Step 5 Generate the random pair of integers unload and between over the given time intervals.

Step 6 Assume that the time clock starts at t = 0. Compute the ship's arrival time:

arrive = arrive + between

Step 7 Compute the difference between this ship's arrival time and the finish time for unloading the previous ship:

timediff = arrive – finish

Step 8  For non-negative timediff, the unloading facility is idle:

Idle = timediff and wait =0

For negative timediff, the ship must wait in line before it can be unloaded:

Wait = -timediff and idle = 0

Step 9  Compute the start time for unloading the ship:

Start= arrive + wait

Step 10 Compute the finish time for unloading the ship:

Finish = start + unload

Step 11 Calculate the time in harbor for the ship:

Harbor = wait + unload

Step 12 Sum harbor into the total harbor time HARTIME for averaging.

Step 13 If harbor > MAXHAR, then set MAXHAR = harbor; otherwise leave MAXHAR as it is.

Step 14 Sum wait into the total waiting time WAITIME for averaging.

Step 15 Sum idle into the total idle time IDLETIME.

Step 16 If wait > MAXWAIT, then set MAXWAIT = wait; otherwise leave MAXWAIT as it is.

Step 17 After all n ships have been processed, compute the averages and the idle fraction: HARTIME = HARTIME/n, WAITIME = WAITIME/n, and IDLETIME = IDLETIME/finish.
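
The following is a minimal Python sketch of Steps 1–17 above. It follows the variable names used in the algorithm, while the inter-arrival and unloading time ranges (and the use of uniform random integers) are illustrative assumptions rather than values taken from the assignment data.

import random

def harbor_simulation(n, between_range=(15, 145), unload_range=(45, 90), seed=None):
    """Simulate n ships arriving at a single unloading facility (Steps 1-17).

    The between_range and unload_range bounds (in minutes) are illustrative
    assumptions; replace them with the intervals from the actual data.
    """
    rng = random.Random(seed)

    # Step 1: first ship's times; the clock starts at t = 0
    between = rng.randint(*between_range)
    unload = rng.randint(*unload_range)
    arrive = between

    # Step 2: initialize the output totals
    hartime, maxhar = unload, unload
    waitime, maxwait, idletime = 0, 0, 0

    # Step 3: finish time for emptying the first ship
    finish = arrive + unload

    # Step 4: remaining ships
    for _ in range(2, n + 1):
        # Steps 5-6: generate the next ship's times and its arrival time
        between = rng.randint(*between_range)
        unload = rng.randint(*unload_range)
        arrive += between

        # Steps 7-8: is the facility idle, or must the ship wait?
        timediff = arrive - finish
        if timediff >= 0:
            idle, wait = timediff, 0
        else:
            idle, wait = 0, -timediff

        # Steps 9-11: start, finish and harbor time for this ship
        start = arrive + wait
        finish = start + unload
        harbor = wait + unload

        # Steps 12-16: accumulate totals and track maxima
        hartime += harbor
        maxhar = max(maxhar, harbor)
        waitime += wait
        maxwait = max(maxwait, wait)
        idletime += idle

    # Step 17: averages and idle fraction
    return {
        "average harbor time": hartime / n,
        "maximum harbor time": maxhar,
        "average waiting time": waitime / n,
        "maximum waiting time": maxwait,
        "facility idle fraction": idletime / finish,
    }

print(harbor_simulation(n=100, seed=1))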




Question three

We collected data on waiting time when adding or dropping a course. The data provided guidelines for effective planning and management in the school. From the data we developed a simulation and optimization model (OSM). The model had three important elements (Addiscott, 2007). First, its interface was user friendly, giving students and the school administration an easy time operating it, and the model could easily be edited in case information about a particular student was entered incorrectly. Second, the model produced a schedule of appropriate waiting times that minimized overall waiting time.


The model comprises four modules, which are described in the following sections.

MAIN sub-model

This sub-model directs the operation of the model by giving students and the administration the ability to choose subsequent sub-models and to load the sample data into the computer.

DATAENT sub-model. This sub-model enables students to enter and edit their primary data to support the operation of one or two operating scenarios. Primary data encompass project site and operating data, command area data, and course add/drop records.
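
A minimal sketch of how the MAIN and DATAENT sub-models might fit together is given below; all function names, record fields, and sample entries are hypothetical illustrations rather than parts of the actual OSM.

# Hypothetical sketch of the MAIN / DATAENT structure described above; the
# function names, record fields, and sample entries are illustrative only.

def dataent(records, student_id, course, action):
    """DATAENT: enter or edit a primary-data record for one student."""
    records[student_id] = {"course": course, "action": action}
    return records

def main():
    """MAIN: direct the model by dispatching to the chosen sub-models."""
    records = {}
    # Simulated user choices; in the real model these would come from the interface.
    dataent(records, "S001", "MATH101", "add")
    dataent(records, "S002", "HIST210", "drop")
    dataent(records, "S001", "MATH101", "drop")   # editing an earlier entry
    print(f"{len(records)} student record(s) loaded:", records)

if __name__ == "__main__":
    main()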


Question four


The steps below can be used to generate accurate simulation models.


Computer simulations are well suited to comparing scenarios. A base model is created and calibrated so that it matches the data from the area being studied. The calibrated model is then verified to ensure that it functions as required given the inputs. Once the model is verified, the final step is to validate it by comparing its outputs with previous data from the area of study. This is achieved by using statistical methods to ensure an appropriate R-squared value.


Calibration is achieved by adjusting the available parameters to change how the model functions and simulates the process. A good example is traffic simulation, where the parameters include car-following sensitivity, headway discharge, and look-ahead distance. These parameters influence driver behaviour, such as how and when a driver should change from one lane to another and the distance a driver should leave between his or her vehicle and the vehicle ahead (Addiscott, 2007).


Model verification

Verification is achieved by taking the output data from the model and comparing it with what is expected given the input data. In a traffic simulation, the volume of cars can be verified to ensure that the actual volume flows throughout the model (Vose, 2006). Simulation models handle model input differently, and in many cases you may find that vehicles do not reach their desired destinations, or that traffic wanting to join the network finds it difficult to do so because of congestion.



Validation is achieved by comparing the model's findings with what is expected based on previous information from the area of study. A valid model produces results similar to those obtained from historical data. The results are assessed using R-squared, a statistic that measures the portion of variation accounted for by the model; a large R-squared value alone does not mean that the model fits well. Graphical residual analysis is another tool used to validate the model (Vose, 2006). Whenever the output data differ substantially from the historical data, this indicates an error in the model.
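
As a sketch of this validation step, the fragment below computes R-squared and the residuals for a model run against historical observations; both data series are hypothetical placeholders.

# Sketch of validating model output against historical data with R-squared
# and residuals. The two series below are hypothetical placeholders.

historical = [120, 135, 150, 160, 175, 190, 210]   # observed values
model_out  = [118, 140, 148, 165, 170, 195, 205]   # simulated values

mean_hist = sum(historical) / len(historical)
ss_res = sum((h - m) ** 2 for h, m in zip(historical, model_out))
ss_tot = sum((h - mean_hist) ** 2 for h in historical)
r_squared = 1 - ss_res / ss_tot

residuals = [h - m for h, m in zip(historical, model_out)]

print(f"R-squared = {r_squared:.3f}")
print("Residuals:", residuals)   # plot these for graphical residual analysis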

Question five

I would consider increasing the number of counselors, because more counselors would mean more students could be served quickly. Waiting lines would shrink, students would spend less time waiting to be served, and congestion in the office would be reduced.
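
A rough queueing sketch of this argument is given below; the arrival rate, service time, and student count are illustrative assumptions, not data from the counseling office.

# Rough sketch: compare average student waiting time with 1, 2, or 3 counselors.
# Arrival and service parameters are illustrative assumptions.
import random

def average_wait(num_counselors, n_students=1000, mean_between=5.0,
                 mean_service=4.5, seed=0):
    rng = random.Random(seed)
    free_at = [0.0] * num_counselors     # time when each counselor is next free
    clock, total_wait = 0.0, 0.0
    for _ in range(n_students):
        clock += rng.expovariate(1.0 / mean_between)   # next student arrives
        counselor = min(range(num_counselors), key=lambda i: free_at[i])
        start = max(clock, free_at[counselor])
        total_wait += start - clock
        free_at[counselor] = start + rng.expovariate(1.0 / mean_service)
    return total_wait / n_students

for c in (1, 2, 3):
    print(f"{c} counselor(s): average wait ≈ {average_wait(c):.1f} minutes")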
























Addiscott, T. M. (2007). Simulation modeling and soil behaviour. Geoderma, 60(1), 15-40.

Vose, D. (2006). Quantitative risk analysis: a guide to Monte Carlo simulation modelling. John Wiley & Sons.



Sampling Distribution

  1. Construct the sampling distribution for the outcomes of an ordinary 6-sided die.


There are 6 possible outcomes in rolling an ordinary 6-sided die.

Since it is just an ‘ordinary’ die, it is safe to assume that each outcome is equally likely to occur. Since the total probability is 1, each outcome has a probability of 1/6 to occur.


Let X = outcome of rolling an ordinary 6-sided die.


The distribution of X would be


P(X = 1) = 1/6

P(X = 2) = 1/6

P(X = 3) = 1/6

P(X = 4) = 1/6

P(X = 5) = 1/6

P(X = 6) = 1/6


The sampling distribution of the outcome can also be expressed as

P(X = x) = 1/6, for x = 1, 2, 3, …, 6

P(X = x) = 0, otherwise.


  2. What is the mean, variance, and standard deviation of this sampling distribution?


The mean of X is equal to

= 1(1/6) + 2(1/6) + 3(1/6) + 4(1/6) + 5(1/6) + 6(1/6) = 3.5.


The variance is equal to

E(X²) − [E(X)]² = (1² + 2² + 3² + 4² + 5² + 6²)(1/6) − (3.5)² = 91/6 − 12.25

= 35/12

≈ 2.9167 [4 decimal places]


The standard deviation is equal to the square root of the variance.

√(35/12) ≈ 1.7078 [4 decimal places]
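
These values can be checked with the short Python fragment below, which works directly from the exact distribution of X.

# Check of the die's mean, variance and standard deviation from its exact distribution.
from math import sqrt

outcomes = range(1, 7)
p = 1 / 6                                                # each face is equally likely

mean = sum(x * p for x in outcomes)                      # 3.5
variance = sum((x - mean) ** 2 * p for x in outcomes)    # 35/12 ≈ 2.9167
std_dev = sqrt(variance)                                 # ≈ 1.7078

print(mean, round(variance, 4), round(std_dev, 4))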

Subjective, Empirical, and Computational Probability for Coin Toss Experiment












This simulation is designed to demonstrate the difference between subjective, empirical, and computational probabilities. A simulation was used that records the result of flipping a coin ten times, concentrating on the number of heads obtained. This simulation was run sixty times and the results were entered into an Excel spreadsheet for analysis. First, subjective probabilities were generated from educated guesswork. Then empirical probabilities were generated based on the results of the simulations. Finally, computational probabilities were calculated using the binomial distribution formula. Both the subjective and the empirical probabilities were compared with the computational probabilities in order to emphasize the differences.
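
A Python sketch of the simulation itself (standing in for the Excel workbook described above) is given below; the random seed is an arbitrary illustrative choice.

# Sketch of the coin-toss experiment: 60 simulations of 10 flips each,
# tracking heads per run and the cumulative percentage of heads.
# The seed is an arbitrary illustrative choice.
import random

rng = random.Random(2015)
runs, flips_per_run = 60, 10

heads_per_run = []
cumulative_heads = 0
for run in range(1, runs + 1):
    heads = sum(rng.randint(0, 1) for _ in range(flips_per_run))
    heads_per_run.append(heads)
    cumulative_heads += heads
    if run % 10 == 0:   # print a progress line every 10 simulations
        pct = cumulative_heads / (run * flips_per_run)
        print(f"after {run} runs: cumulative percent heads = {pct:.4f}")

# Empirical probability of each possible heads count (0-10), as in Table 2
empirical = [heads_per_run.count(k) / runs for k in range(11)]
print("empirical distribution:", [round(p, 3) for p in empirical])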

To begin with, there are several questions regarding the subjective probabilities associated with the eleven possible results. Since five heads will occur more frequently than any other count, its probability should be significantly greater than the average value of 9.1% (that is, 1/11); the assigned value for obtaining five heads is 22%. Since the binomial distribution is symmetric, the probability of getting three heads will be the same as the probability of getting seven heads. Knowing that the total number of heads must be between zero and ten, and that the sum of all the percentages must equal exactly 100%, these were each assigned a subjective probability of 12%. Again, since the binomial distribution is symmetric, the probability of getting no heads will be the same as getting all heads; the subjective probability assigned to both of these results was 1%.

From this point the remainder of the subjective probability chart was filled out knowing that the distribution was symmetric, each value had to be between 0 and 1, and the sum of all the probabilities had to be exactly 1. After a little bit of experimentation the table at the top of the following page was generated:

X = Heads    0     1     2     3     4     5     6     7     8     9     10
Prob(X)      0.01  0.02  0.05  0.12  0.19  0.22  0.19  0.12  0.05  0.02  0.01

Examination by eye confirms that the distribution is symmetric and each value is within the allowed range. For the final requirement it is readily confirmed that

2 (0.01 + 0.02 + 0.05 + 0.12 + 0.19) + 0.22 = 1.00

and an acceptable subjective probability distribution has been generated.

Assuming that a fair coin is being used, the probability of getting heads on any toss will be 0.50, as will the probability of getting tails. This means that over an extended period of time half of the tosses are expected to be heads and half are expected to be tails. As a result, for a total of 600 tosses the expected number of heads is 300. Also, since the probabilities of success (heads) and failure (tails) are equal, the probability of getting three heads is exactly equal to the probability of getting three tails. Finally, the probability of getting three heads is exactly equal to the probability of getting seven tails. This is because the distribution is binomial and the only possibilities are heads or tails: if exactly three heads are obtained, the other seven tosses must have resulted in tails.
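
The computational probabilities referred to throughout come from the binomial formula P(X = k) = C(10, k)(0.5)^10; a short sketch of that calculation, together with the expected number of heads in 600 tosses, follows.

# Exact (computational) probabilities for the number of heads in 10 fair tosses,
# P(X = k) = C(10, k) * 0.5**10, plus the expected heads in 600 tosses.
from math import comb

n, p = 10, 0.5
computational = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

for k, prob in enumerate(computational):
    print(f"P(X = {k:2d}) = {prob:.3f}")

print("Expected heads in 600 tosses:", 600 * p)   # 300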

The first graph presented for discussion is a line graph showing the evolution of cumulative heads percentage over all 60 simulations. It would be expected that this cumulative percentage would approach the success probability over a long enough timeline. Since the probability of success in this case (assuming a fair coin) is 0.50 the line graph should approach this value with decreasing fluctuations as the number of simulations increases. This line graph is presented at the top of the following page:

The sharp jump at the beginning simply represents an abundance of heads after a small number of tosses. Around the 25th simulation the cumulative percentage returned to its expected value and then dipped below 0.50, showing an overall abundance of tails. As expected, the line graph continued to fluctuate around 0.50, with the degree of fluctuation decreasing as the total number of simulations increased.

The second graph is a comparison of the subjective probabilities to the computational probabilities calculated. The subjective probabilities can be thought of as a gut instinct while the computational probabilities are exact based on the binomial distribution formula. This graph is presented at the top of the following page:


The subjective probabilities are shown in blue and the computational probabilities are shown in red. For this particular subjective probability distribution the central values were underestimated while the outer values were overestimated. Notice that both distributions are symmetric as would be expected. The overall logic error in the subjective probability distribution appears to be a misunderstanding of how quickly the probability decreases as the number of successes deviates in either direction from the expected value of five. Also notice that the standard deviation of the subjective distribution is larger than that of the computational distribution.

The final graph is a comparison of the empirical probabilities to the computational probabilities. The empirical probabilities are determined from the sixty simulations: empirical probability distributions are expected to approach the computational distribution as the number of trials increases. Again this graph is presented at the top of the following page:


Interestingly enough the empirical distribution also appears to underestimate the computational distribution for the central values and overestimate it for the outer values. Notice here that the empirical distribution is no longer symmetric, and there is no reason to expect that it would be. It appears that sixty simulations are enough for the empirical distribution to be recognizable compared with the computational distribution. In other words the empirical distribution is a reasonable approximation but is not exact.

In conclusion this experiment is very good for showing the differences between the three types of probability distributions. It is simple to execute and the fact that the probabilities of success and failure are equal make the subjective distribution easier to analyze. All three distributions are relatively similar and all of the results were as expected.



The first thing that I learned was that there is actually some reasoning that goes into determining a subjective distribution. Before this experiment I felt it was a mostly useless exercise, especially if the computational distribution was readily available. Now I realize that with a bit of logic it is possible and actually not too difficult to generate a reasonable subjective distribution. The advantage of this is that for situations where the computational distribution is more difficult a ballpark idea can be gathered with relative simplicity.

The next thing that I gained from this project is a better interpretation of the line graph for the cumulative percentage of heads. I knew that it would approach the expected value, but I never really thought about what the fluctuations meant. Now I understand that when the cumulative percentage is greater than the expected value there is an abundance of successes, while when it is less than the expected value there is an abundance of failures. The last thing that I learned was that a relatively small number of simulations can generate a reasonable empirical distribution. With eleven different possible outcomes I would have thought that considerably more than sixty simulations would have been needed.

On a slightly different note I also learned some new functions for the Excel spreadsheets. The most interesting one to me was the command to generate the exact probabilities for a binomial distribution. All the other functions I used were familiar to me, but it was good practice to use them again. I feel like this project helped me learn and reaffirm quite a bit of knowledge related to probability distributions and the use of Excel to analyze them.



Simulation Number   Number of Heads   Cumulative Heads   Total Coins Tossed   Cumulative Percent Heads
1 5 5 10 0.5000
2 7 12 20 0.6000
3 5 17 30 0.5667
4 6 23 40 0.5750
5 4 27 50 0.5400
6 7 34 60 0.5667
7 3 37 70 0.5286
8 4 41 80 0.5125
9 6 47 90 0.5222
10 8 55 100 0.5500
11 6 61 110 0.5545
12 5 66 120 0.5500
13 6 72 130 0.5538
14 5 77 140 0.5500
15 3 80 150 0.5333
16 3 83 160 0.5188
17 7 90 170 0.5294
18 5 95 180 0.5278
19 6 101 190 0.5316
20 2 103 200 0.5150
21 6 109 210 0.5190
22 5 114 220 0.5182
23 3 117 230 0.5087
24 2 119 240 0.4958
25 6 125 250 0.5000
26 2 127 260 0.4885
27 8 135 270 0.5000
28 4 139 280 0.4964
29 3 142 290 0.4897
30 7 149 300 0.4967
31 5 154 310 0.4968
32 3 157 320 0.4906
33 4 161 330 0.4879
34 6 167 340 0.4912
35 9 176 350 0.5029
36 5 181 360 0.5028
37 7 188 370 0.5081
38 6 194 380 0.5105
39 4 198 390 0.5077
40 2 200 400 0.5000
41 5 205 410 0.5000
42 8 213 420 0.5071
43 4 217 430 0.5047
44 5 222 440 0.5045
45 5 227 450 0.5044
46 6 233 460 0.5065
47 8 241 470 0.5128
48 7 248 480 0.5167
49 4 252 490 0.5143
50 4 256 500 0.5120
51 0 256 510 0.5020
52 4 260 520 0.5000
53 7 267 530 0.5038
54 6 273 540 0.5056
55 5 278 550 0.5055
56 2 280 560 0.5000
57 3 283 570 0.4965
58 5 288 580 0.4966
59 4 292 590 0.4949
60 7 299 600 0.4983


Table 1.

Simulation number, Total number of heads, Cumulative number of heads
Total number of coins tossed, Cumulative percentage of heads



Number of Successes   Subjective Probability   Actual Successes   Empirical Probability   Computational Probability
0 0.010 1 0.017 0.001
1 0.020 0 0.000 0.010
2 0.050 5 0.083 0.044
3 0.120 7 0.117 0.117
4 0.190 10 0.167 0.205
5 0.220 13 0.217 0.246
6 0.190 11 0.183 0.205
7 0.120 8 0.133 0.117
8 0.050 4 0.067 0.044
9 0.020 1 0.017 0.010
10 0.010 0 0.000 0.001
Sums 1.000 60.000 1.000 1.000



Table 2.

Number of successes, Subjective probability, Actual successes

Empirical probability, Computational probability


The role of the Criminal Grand Jury



Brenner, S., & Shaw, L. (2003). What does a grand jury do? Retrieved June 22, 2015, from

The article by Brenner and Shaw clearly explains the work of the grand jury. The authors expound on the grand jury's roles of investigating and bringing charges. They also trace the history of the grand jury from the time it was referred to as "the people's panel" or the "voice of the community." During that time, it was a way of giving individuals insight into government affairs so that everyone would understand its two functions: examining community conditions, and investigating crime and bringing charges against persons who might have committed a crime. This article is important because it explains how the bringing of charges occurs: the grand jurors first listen to the evidence and then decide whether it establishes probable cause to believe that the person the prosecutor seeks to charge has committed the crime(s). On the investigating side, grand juries conduct investigations either as a purely separate function or as part of bringing criminal charges.

Legal Information Institute. (1992). Rule 6. The Grand Jury | Federal Rules of Criminal Procedure | LII / Legal Information Institute. Retrieved June 22, 2015, from

This article by Cornell University Law School explains the grand jury from various perspectives. First, the summoning of a grand jury may be done in general or with alternate jurors: it is done in general when the public interest requires it, while alternate jurors take over if the court chooses to select alternate jurors for a grand jury. Second, it explains objections to a grand juror or to the grand jury: if a juror is not legally qualified, he or she can be challenged either by a defendant or by the government, and the grand jury as a whole can also be challenged. An objection may also be raised through a motion to dismiss the indictment, on the basis of an objection to either an individual juror or the grand jury. Lastly, the article explains how a foreperson and deputy foreperson are appointed, who must be present during a session, and how the recording and disclosure of the proceedings is to be done.

Lippman, J. (2009). New York State Unified Court System Grand Juror's Handbook. Retrieved June 22, 2015, from

In this article, Chief Judge Jonathan Lippman explains that the New York and United States constitutions establish the grand jury, which has the authority to decide whether or not a person should be formally accused of having committed a crime. A further discussion shows that the grand jury is an important part of the criminal justice system: it is a cross-section of the community designed both to protect people's rights against unfounded accusations and to uphold the law of the land by indicating the persons who are alleged to have committed crimes. The article further explains that even though jury service may at times be inconvenient, interrupting one's business and personal life, it is one of the unique privileges that citizens enjoy. Lastly, jury service is seen not only as a civic responsibility but also as an opportunity to participate in the justice system.




Vance, Jr., C. R. (2014). Criminal Justice System: How It Works | The New York County District Attorney's Office. Retrieved June 22, 2015, from

This article is relevant because it gives a deep understanding of what the grand jury is, the roles and responsibilities played by the grand jury, and how its proceedings should be carried out, including whether those proceedings should be public or not. In explaining the grand jury, it clearly states that unless a defendant consents, every felony case should be handled by a grand jury comprising approximately 23 members who hear the presented evidence and then take various measures concerning it, including carrying out investigations. After this, they can direct the filing of a prosecutor's information, vote an indictment, issue a report, or direct the removal of a case to family court. Finally, regarding grand jury proceedings, the article clearly shows that the proceedings should be secret and can only be disclosed to specific authorized persons.











Brenner, S., & Shaw, L. (2003). What does a grand jury do? Retrieved June 22, 2015, from

Legal Information Institute. (1992). Rule 6. The Grand Jury | Federal Rules of Criminal Procedure | LII / Legal Information Institute. Retrieved June 22, 2015, from

Lippman, J. (2009). New York State Unified Court System Grand Juror’s Handbook. Retrieved June 22, 2015, from

Vance, Jr., C. R. (2014). Criminal Justice System: How It Works | The New York County District Attorney's Office. Retrieved June 22, 2015, from