RESEARCH ARTICLES | RISK + CRYSTAL BALL + ANALYTICS

Is Oracle Crystal Ball still relevant?
  • 28 August 2024
  • Author: Eric Torkia
  • Number of views: 144
  • Comments: 0

What tools should modern analysts master for 3-tier design after Excel?

Decision Science Developer Stack

When it comes to building a full-fledged developer stack that takes your analysis to the next level, it's not only about the tools, but about which tools are the most impactful when automating and sharing analysis for decision making, or when analyzing risk on projects and business operations.

  • 26 August 2024
  • Author: Eric Torkia
  • Number of views: 107
  • Comments: 0

Comparing Simulation Performance for Crystal Ball, R, Julia and @RISK

The Need For Speed 2019
The Need for Speed 2019 study compares Excel add-in based modeling using @RISK and Crystal Ball with programming environments such as R and Julia. All three aspects of speed are covered (time-to-solution, time-to-answer and processing speed), in addition to accuracy and precision.
  • 25 February 2019
  • Author: Eric Torkia
  • Number of views: 27510
  • Comments: 0
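
As a rough illustration of the processing-speed aspect covered in the study above (not code from the study itself), the following R snippet times a toy two-variable Monte Carlo profit model over one million trials; the distributions and parameters are assumed purely for demonstration.

# Time a simple Monte Carlo model in R; all inputs below are illustrative assumptions
set.seed(42)
n_trials <- 1000000                                               # number of simulated scenarios

elapsed <- system.time({
  revenue <- rlnorm(n_trials, meanlog = log(100), sdlog = 0.25)   # uncertain revenue
  cost    <- rnorm(n_trials, mean = 70, sd = 10)                  # uncertain cost
  profit  <- revenue - cost
})["elapsed"]

cat(sprintf("Simulated %.0f trials in %.2f seconds\n", n_trials, elapsed))
cat(sprintf("Mean profit: %.2f | 5th percentile: %.2f\n", mean(profit), quantile(profit, 0.05)))

Building the equivalent model in Crystal Ball, @RISK, or Julia and timing it the same way gives the kind of processing-speed comparison the study reports.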

How I Learned to Think of Business as a Scientific Experiment

Bayesian Reasoning using R (Part 2) : Discrete Inference with Sequential Data
Imagine playing a game in which someone asks you to infer the number of sides of a polyhedral die based on the face numbers that show up in repeated throws of the die. The only information you are given beforehand is that the actual die will be selected from a set of seven dice having these numbers of faces: (4, 6, 8, 10, 12, 15, 18). Assuming you can trust the person who reports the outcome on each throw, after how many rolls of the die will you be willing to specify which die was chosen?
  • 6 November 2018
  • Author: Robert Brown
  • Number of views: 13870
  • Comments: 0
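
For readers curious about the mechanics behind this teaser, here is a minimal R sketch of the sequential updating it describes; it is not the article's code, and the reported rolls are made up for illustration. Each roll multiplies the prior over the seven candidate dice by the likelihood of that face under each die, then renormalizes.

# Sequential Bayesian updating over seven candidate dice (illustrative, not the article's code)
sides <- c(4, 6, 8, 10, 12, 15, 18)                # faces of the seven candidate dice
prior <- rep(1 / length(sides), length(sides))     # uniform prior over which die was chosen

rolls <- c(3, 7, 5, 9, 2)                          # hypothetical reported outcomes

posterior <- prior
for (r in rolls) {
  likelihood <- ifelse(r <= sides, 1 / sides, 0)   # P(face = r | die) for a fair die
  posterior  <- posterior * likelihood             # Bayes' rule, unnormalized
  posterior  <- posterior / sum(posterior)         # renormalize after each roll
}

print(data.frame(sides = sides, posterior = round(posterior, 4)))

With these made-up rolls, every die with fewer than ten faces is ruled out once the 9 appears, and the remaining probability concentrates on the smallest die still consistent with the data.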

Gender Inference from a Specimen Measurement

Bayesian Reasoning using R
Imagine that we have a population of something composed of two subset populations that, while distinct from each other, share a common characteristic that can be measured along some kind of scale. Furthermore, let’s assume that each subset population expresses this characteristic with a frequency distribution unique to each. In other words, along the scale of measurement for the characteristic, each subset displays varying levels of the characteristic among its members. Now, we choose a specimen from the larger population in an unbiased manner and measure this characteristic for this specific individual. Are we justified in inferring the subset membership of the specimen based on this measurement alone? Bayes' rule (or theorem), something you may have heard about in this age of exploding data analytics, tells us that we can be so justified as long as we assign a probability (or degree of belief) to our inference. The following discussion provides an interesting way of understanding the process for doing this. More importantly, I present how Bayes' theorem helps us overcome a common thinking failure associated with making inferences from an incomplete treatment of all the information we should use. I’ll use a bit of a fanciful example to convey this understanding along with showing the associated calculations in the R programming language.
  • 28 October 2018
  • Author: Robert Brown
  • Number of views: 15005
  • Comments: 0
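
As a rough companion to the teaser above, the sketch below applies Bayes' rule to a single measurement drawn from one of two subpopulations; it is not the article's code, and the normal distributions, their parameters, and the 50/50 prior are assumed only for illustration.

# Bayes' rule for inferring subset membership from one measurement (illustrative assumptions)
prior_A <- 0.5                                    # prior probability of subpopulation A
prior_B <- 1 - prior_A                            # prior probability of subpopulation B

x <- 170                                          # hypothetical measurement of the chosen specimen

like_A <- dnorm(x, mean = 178, sd = 7)            # density of the measurement under A's distribution
like_B <- dnorm(x, mean = 164, sd = 6)            # density of the measurement under B's distribution

post_A <- like_A * prior_A / (like_A * prior_A + like_B * prior_B)   # Bayes' rule
cat(sprintf("P(subpopulation A | measurement = %.0f) = %.3f\n", x, post_A))

Because this particular measurement is more likely under subpopulation B's assumed distribution, the posterior for A falls below the 50/50 prior, which is the kind of belief revision the article walks through.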