Is Oracle Crystal Ball still relevant?

Are Excel simulation add-ins like Oracle Crystal Ball the right tools for decision making? This short blog weighs the pros and cons of Oracle Crystal Ball.
Author: Eric Torkia
Decision Science Developer Stack

What tools should modern analysts master for 3-tier design after Excel?

When it comes to building a full-fledged developer stack to take your analysis to the next level, it's not only about the tools, but about which tools are the most impactful when automating and sharing analysis for decision making or analyzing risk on projects and business operations.

Author: Eric Torkia
The Need For Speed 2019

Comparing Simulation Performance for Crystal Ball, R, Julia and @RISK

The Need for Speed 2019 study compares Excel add-in based modeling using @RISK and Crystal Ball with programming environments such as R and Julia. All three aspects of speed are covered (time-to-solution, time-to-answer and processing speed), in addition to accuracy and precision.
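
For readers who would like to try a rough benchmark of their own, here is a minimal R sketch of timing a Monte Carlo run; the model and trial count are illustrative assumptions, not the study's benchmark code.

  # Minimal timing sketch in base R (illustrative model, not the study's benchmark)
  set.seed(42)
  n_trials <- 1e6                                    # assumed trial count
  elapsed <- system.time({
    revenue <- rnorm(n_trials, mean = 100, sd = 15)  # assumed revenue distribution
    cost    <- rnorm(n_trials, mean = 70,  sd = 10)  # assumed cost distribution
    profit  <- revenue - cost                        # simple transfer function
  })
  summary(profit)      # accuracy and precision: the distribution of outcomes
  elapsed["elapsed"]   # processing speed: wall-clock seconds for the run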
Author: Eric Torkia
Bayesian Reasoning using R (Part 2) : Discrete Inference with Sequential Data

How I Learned to Think of Business as a Scientific Experiment

Imagine playing a game in which someone asks you to infer the number of sides of a polyhedral die based on the face numbers that show up in repeated throws of the die. The only information you are given beforehand is that the actual die will be selected from a set of seven dice having these numbers of faces: (4, 6, 8, 10, 12, 15, 18). Assuming you can trust the person who reports the outcome on each throw, after how many rolls of the die will you be willing to specify which die was chosen?
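
As a rough sketch of the kind of sequential updating the article walks through, here is how the posterior over the seven candidate dice could be computed in R; the reported rolls below are made up for illustration.

  # Sequential Bayesian updating over the candidate dice (illustrative rolls only)
  sides <- c(4, 6, 8, 10, 12, 15, 18)             # the seven candidate dice
  prior <- rep(1 / length(sides), length(sides))  # uniform prior over the dice
  rolls <- c(3, 7, 5, 2, 6)                       # hypothetical reported outcomes
  for (r in rolls) {
    likelihood <- ifelse(r <= sides, 1 / sides, 0)  # P(face r | die with s sides)
    posterior  <- prior * likelihood
    prior      <- posterior / sum(posterior)        # normalize; becomes next prior
  }
  round(setNames(prior, sides), 3)                # belief in each die after the rolls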
Author: Robert Brown
Bayesian Reasoning using R

Gender Inference from a Specimen Measurement

Imagine that we have a population of something composed of two subset populations that, while distinct from each other, share a common characteristic that can be measured along some kind of scale. Furthermore, let’s assume that each subset population expresses this characteristic with a frequency distribution unique to each. In other words, along the scale of measurement for the characteristic, each subset displays varying levels of the characteristic among its members. Now, we choose a specimen from the larger population in an unbiased manner and measure this characteristic for this specific individual. Are we justified in inferring the subset membership of the specimen based on this measurement alone? Bayes' rule (or theorem), something you may have heard about in this age of exploding data analytics, tells us that we can be so justified as long as we assign a probability (or degree of belief) to our inference. The following discussion provides an interesting way of understanding the process for doing this. More importantly, I present how Bayes' theorem helps us overcome a common thinking failure associated with making inferences from an incomplete treatment of all the information we should use. I’ll use a bit of a fanciful example to convey this understanding along with showing the associated calculations in the R programming language.
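
A minimal sketch of the calculation described above, using made-up normal distributions for the two subsets (the article develops its own example and numbers in R):

  # Bayes' rule for subset membership given one measurement (all parameters assumed)
  prior_A <- 0.5                            # unbiased selection: equal priors
  prior_B <- 0.5
  x <- 62                                   # the specimen's measured characteristic
  like_A <- dnorm(x, mean = 55, sd = 5)     # assumed frequency distribution, subset A
  like_B <- dnorm(x, mean = 68, sd = 6)     # assumed frequency distribution, subset B
  post_A <- like_A * prior_A / (like_A * prior_A + like_B * prior_B)
  c(A = post_A, B = 1 - post_A)             # degree of belief in each membership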
Author: Robert Brown
All Posts Term: Statistics

Tolerance Analysis using Monte Carlo (Part 11 / 13)

How do Monte Carlo analysis results differ from those derived via WCA or RSS methodologies? Let us return to the one-way clutch example and provide a practical comparison in terms of a non-linear response. From the previous posts, we recall that there are two system outputs of interest: stop angle and spring gap. These outputs are described mathematically with response equations, as transfer functions of the inputs.
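
The flavour of that comparison can be reproduced on a much simpler model; the sketch below uses a made-up linear stack-up rather than the clutch's non-linear transfer functions, with each tolerance treated as +/- 3 sigma for the Monte Carlo draws.

  # WCA, RSS and Monte Carlo on an illustrative linear stack-up (not the clutch model)
  nominal   <- c(10, 25, 15)           # assumed nominal dimensions
  tolerance <- c(0.1, 0.2, 0.15)       # assumed symmetric tolerances
  wca <- sum(tolerance)                # Worst Case: tolerances add directly
  rss <- sqrt(sum(tolerance^2))        # Root Sum Squares: tolerances add in quadrature
  set.seed(1)
  n <- 1e5
  draws <- sapply(seq_along(nominal),
                  function(i) rnorm(n, nominal[i], tolerance[i] / 3))
  gap <- rowSums(draws)                # linear response: sum of the dimensions
  mc  <- 3 * sd(gap)                   # +/- 3 sigma spread of the simulated output
  c(WCA = wca, RSS = rss, MonteCarlo = mc)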

Introduction to Monte Carlo Analysis (Part 10 / 13)

In past blogs, I have waxed eloquent about two traditional methods of performing Tolerance Analysis, the Worst Case Analysis and the Root Sum Squares. With the advent of ever-more-powerful processors and the increasing importance engineering organizations place on transfer functions, the next logical step is to use these resources and predict system variation with Monte Carlo Analysis.
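
A minimal R sketch of the idea, assuming a made-up non-linear transfer function and input distributions (nothing here comes from the post itself):

  # Propagate input variation through a transfer function with Monte Carlo
  set.seed(123)
  n  <- 1e5
  d1 <- rnorm(n, mean = 20.0, sd = 0.05)   # assumed input dimension 1
  d2 <- rnorm(n, mean = 12.5, sd = 0.04)   # assumed input dimension 2
  y  <- atan(d2 / d1) * 180 / pi           # illustrative non-linear response (an angle)
  c(mean = mean(y), sd = sd(y))            # predicted system variation
  quantile(y, c(0.00135, 0.99865))         # roughly the +/- 3 sigma bounds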

Probability Distributions in Tolerance Analysis (Part 4 / 13)

With uncertainty and risk lurking around every corner, it is incumbent on us to account for them in our forward business projections, whether those predictions are financially based or engineering-centric. A design engineer, for example, may express dimensional variance in terms of a tolerance around nominal dimensions. But what does this mean? Does a simple range between upper and lower values accurately describe the variation?
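
A quick way to see why the question matters: the short R sketch below models the same made-up 10.00 +/- 0.10 dimension, first as a flat range and then as a normal distribution with the tolerance treated as +/- 3 sigma.

  # Same tolerance on paper, different assumptions about how parts actually vary
  set.seed(7)
  n <- 1e5
  uniform_dim <- runif(n, min = 9.90, max = 10.10)     # every value equally likely
  normal_dim  <- rnorm(n, mean = 10.00, sd = 0.10 / 3) # clustered around nominal
  c(uniform = mean(abs(uniform_dim - 10) > 0.08),      # share of parts near the edges
    normal  = mean(abs(normal_dim  - 10) > 0.08))      # far fewer parts near the edges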

Algorithms and the New Millennium

Dr David Berlinski (2000) makes the historical observation that two great ideas have most influenced the technological progress of the Western world:

The first is the calculus, the second the algorithm. The calculus and the rich body of mathematical analysis to which it gave rise made modern science possible; but it has been the algorithm that has made possible the modern world. (Berlinski, p. xv)

Dr Berlinski concludes that:

The great era of mathematical physics is now over. The three-hundred-year effort to represent the material world in mathematical terms has exhausted itself. The understanding that it was to provide is infinitely closer than it was when Isaac Newton wrote in the late seventeenth century, but it is still infinitely far away…. The algorithm has come to occupy a central place in our imagination. It is the second great scientific idea of the West. There is no third. (Berlinski, pp. xv-xvi)

Source: Berlinski, D (2000). The Advent of the Algorithm: The 300-Year Journey from an Idea to the Computer. San Diego, CA: Harcourt.

Related Posts: Enter the Algorithm

Decision Warranties

According to Prof Ronald A Howard (1992):

Three of the warranties that I would like to have in any decision situation are that:
  1. The decision approach I am using has all the terms and concepts used so clearly defined that I know both what I am talking about and what I am saying about it;
  2. I can readily interpret the results of the approach to see clearly the implications of choosing any alternative, including of course, the best one; and
  3. The procedure used to arrive at the recommendations does not violate the rules of logic (common sense).

Plain and simple... Source: Howard, R A (1992), Heathens, Heretics, and Cults, Interfaces, 22(6), 15-27.

Risk versus Uncertainty

Prof Frank H Knight (1921) proposed that "risk" is randomness with knowable probabilities, and "uncertainty" is randomness with unknowable probabilities. However, risk and uncertainty both share features with randomness. The illustration below explains the relationship of the concepts better than words...

Source: Knight, F H (2002/1921), Risk, Uncertainty and Profit, Washington, DC: BeardBooks.

 

What is a model?

When using tools such as Excel, Crystal Ball or ModelRisk, it is very important to be able to translate a mental model into a mathematical one. Let me illustrate: when you think about your business, you often think of abstract notions such as profit or margins. These are mental constructs, because there are no physical representations of profit or margins (except a pile of cash), only mathematical ones.
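
For instance, the abstract notion of profit only becomes something a spreadsheet or simulation tool can work with once it is written down as a formula; here is a minimal sketch in R with placeholder figures.

  # The mental model "profit" translated into a mathematical one (placeholder figures)
  units_sold     <- 1200
  price_per_unit <- 49.99
  unit_cost      <- 31.50
  fixed_costs    <- 12000
  revenue <- units_sold * price_per_unit
  profit  <- revenue - units_sold * unit_cost - fixed_costs
  margin  <- profit / revenue
  c(profit = profit, margin = margin)   # the constructs, now as computable quantities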
