Is Oracle Crystal Ball still relevant?

Are Excel simulation add-ins like Oracle Crystal Ball the right tools for decision making? This short post weighs the pros and cons of Oracle Crystal Ball.
Author: Eric Torkia
Decision Science Developer Stack

What tools should modern analysts master for three-tier design after Excel?

When it comes to having a full-fledged developer stack to take your analysis to the next level, it's not about the tools alone, but about which tools are the most impactful when automating and sharing analysis for decision making, or when analyzing risk on projects and business operations.

Author: Eric Torkia
The Need For Speed 2019

Comparing Simulation Performance for Crystal Ball, R, Julia and @RISK

The Need for Speed 2019 study compares Excel add-in based modeling using @RISK and Crystal Ball to programming environments such as R and Julia. All three aspects of speed are covered (time-to-solution, time-to-answer and processing speed), in addition to accuracy and precision.
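
For readers curious about the raw processing-speed dimension, here is a minimal R timing sketch of the kind of harness such a benchmark relies on. The model and inputs are invented for illustration; this is not the study's actual benchmark code.

```r
# Time a raw Monte Carlo run of a simple two-input model (illustrative only).
n <- 1e7
system.time({
  x   <- rnorm(n, 100, 10) + rlnorm(n, 2, 0.5)  # simulate the model n times
  res <- c(mean = mean(x), p95 = unname(quantile(x, 0.95)))
})
res  # summary statistics computed inside the timed block
```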
Author: Eric Torkia
Bayesian Reasoning using R (Part 2) : Discrete Inference with Sequential Data

How I Learned to Think of Business as a Scientific Experiment

Imagine playing a game in which someone asks you to infer the number of sides of a polyhedral die based on the face numbers that show up in repeated throws of the die. The only information you are given beforehand is that the actual die will be selected from a set of seven dice having these numbers of faces: (4, 6, 8, 10, 12, 15, 18). Assuming you can trust the person who reports the outcome of each throw, after how many rolls of the die will you be willing to specify which die was chosen?
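
As a rough illustration of the sequential updating the article walks through, here is a minimal R sketch. The roll outcomes are made up for the example; only the set of candidate dice comes from the text.

```r
# Posterior over which die was chosen, updated after each reported roll.
sides <- c(4, 6, 8, 10, 12, 15, 18)             # the seven candidate dice
prior <- rep(1 / length(sides), length(sides))  # uniform prior over the dice
rolls <- c(7, 3, 10, 5)                         # hypothetical reported outcomes

posterior <- prior
for (x in rolls) {
  likelihood <- ifelse(sides >= x, 1 / sides, 0)  # P(roll = x | die)
  posterior  <- posterior * likelihood
  posterior  <- posterior / sum(posterior)        # renormalize after each throw
}
round(posterior, 4)  # dice smaller than the largest roll are eliminated
```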
Author: Robert Brown
Bayesian Reasoning using R

Gender Inference from a Specimen Measurement

Imagine that we have a population of something composed of two subset populations that, while distinct from each other, share a common characteristic that can be measured along some kind of scale. Furthermore, let’s assume that each subset population expresses this characteristic with a frequency distribution unique to it. In other words, along the scale of measurement for the characteristic, each subset displays varying levels of the characteristic among its members. Now, we choose a specimen from the larger population in an unbiased manner and measure this characteristic for this specific individual. Are we justified in inferring the subset membership of the specimen based on this measurement alone? Bayes' rule (or theorem), something you may have heard about in this age of exploding data analytics, tells us that we can be so justified as long as we assign a probability (or degree of belief) to our inference.

The following discussion provides an interesting way of understanding the process for doing this. More importantly, I present how Bayes' theorem helps us overcome a common thinking failure associated with making inferences from an incomplete treatment of all the information we should use. I’ll use a bit of a fanciful example to convey this understanding, along with showing the associated calculations in the R programming language.
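
To make the calculation concrete, here is a minimal R sketch of Bayes' rule for this setup, assuming normal frequency distributions and illustrative parameters; the article's own example and numbers may differ.

```r
# Infer subset membership from one measurement using Bayes' rule.
prior_A <- 0.5; prior_B <- 0.5   # unbiased selection from the population
mu_A <- 170; sd_A <- 7           # subset A's distribution of the characteristic
mu_B <- 158; sd_B <- 6           # subset B's distribution of the characteristic
x <- 165                         # the specimen's measured value

like_A <- dnorm(x, mu_A, sd_A)   # P(measurement | A)
like_B <- dnorm(x, mu_B, sd_B)   # P(measurement | B)

post_A <- like_A * prior_A / (like_A * prior_A + like_B * prior_B)
post_A  # degree of belief that the specimen belongs to subset A
```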
Author: Robert Brown

Reducing Project Costs and Risks with Oracle Primavera Risk Analysis

It is a well-known fact that many projects fail to meet some or all of their objectives because certain risks were underestimated, not quantified or unaccounted for. It is the objective of every project manager and risk analyst to ensure that the project delivered is the one that was expected. With the right know-how and the right tools, this can be achieved on projects of almost any size. We are going to present a quick primer on project risk analysis and how it can positively impact the bottom line. We are also going to show you how Primavera Risk Analysis can quickly identify risks and performance drivers that, if managed correctly, will enable organizations to meet or exceed project delivery expectations.
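
As a taste of what such quantification looks like, here is a minimal Monte Carlo sketch in R of a three-task serial schedule with triangular duration estimates. The tasks and numbers are invented for illustration and are not taken from Primavera Risk Analysis.

```r
# Project duration risk: sum three uncertain task durations (illustrative).
rtri <- function(n, lo, mode, hi) {   # inverse-transform triangular sampler
  u <- runif(n); f <- (mode - lo) / (hi - lo)
  ifelse(u < f,
         lo + sqrt(u * (hi - lo) * (mode - lo)),
         hi - sqrt((1 - u) * (hi - lo) * (hi - mode)))
}
n <- 1e5
total <- rtri(n, 10, 12, 20) + rtri(n, 5, 8, 14) + rtri(n, 7, 9, 16)
quantile(total, c(0.5, 0.8))   # P50 vs P80 completion time, in days
mean(total > 35)               # probability of missing a 35-day target
```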


Modeling Time-Series Forecasts with @RISK


Making decisions about the future is becoming harder and harder because of the ever-increasing sources and rate of uncertainty that can impact the final outcome of a project or investment. Several tools have proven instrumental in helping managers and decision makers tackle this: time-series forecasting, judgmental forecasting and simulation.

This webinar is going to present these approaches and how they can be combined to improve both tactical and strategic decision making. We will also cover the role of analytics in the organization and how it has evolved over time to give participants strategies to mobilize analytics talent within the firm.  

We will discuss these topics as well as present practical models and applications using @RISK.
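
As a simple illustration of combining a time-series forecast with simulation, here is a minimal R sketch that fans out Monte Carlo paths from a random walk with drift. The parameters are invented; the webinar's @RISK models are naturally richer.

```r
# Fan of simulated forecast paths for a random walk with drift (illustrative).
set.seed(1)
horizon <- 12; n_paths <- 5000
drift <- 0.5; vol <- 2; last_obs <- 100

steps <- matrix(rnorm(horizon * n_paths, drift, vol), nrow = horizon)
paths <- last_obs + apply(steps, 2, cumsum)   # one column per simulated future

# 80% prediction band and median at each forecast horizon
t(apply(paths, 1, quantile, probs = c(0.1, 0.5, 0.9)))
```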

Excel Simulation Show-Down Part 3: Correlating Distributions

Modeling in Excel, or with any other tool for that matter, is defined as the visual and/or mathematical representation of a set of relationships. Correlation is about defining the strength of a relationship. Between a model and correlation analysis, we are able to come much closer to replicating the true behavior and potential outcomes of the problem/question we are analyzing. Correlation is the bread and butter of any serious analyst seeking to analyze risk or gain insight into the future.

Given that correlation has such a big impact on the answers and analysis we are conducting, it makes a lot of sense to cover how to apply correlation in the various simulation tools. Correlation is also a key tenet of time-series forecasting… but that is another story.

In this article, we are going to build a simple correlated returns model using our usual suspects (Oracle Crystal Ball, Palisade @RISK, Vose ModelRisk and RiskSolver). The objective of the correlated returns model is to take into account the relationship (correlation) in how the selected asset classes move together: does asset B go up or down when asset A goes up, and by how much? At the end of the day, correlating variables ensures your model behaves correctly and within the realm of the possible.
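
For readers working outside the four add-ins, here is a minimal R sketch of the same idea: inducing a target correlation between two asset classes with a normal copula. The marginals and the 0.7 correlation are illustrative, and the MASS package is assumed.

```r
# Correlated returns for two asset classes via a normal copula (illustrative).
library(MASS)
rho <- matrix(c(1, 0.7, 0.7, 1), 2, 2)          # target correlation matrix
z   <- mvrnorm(1e5, mu = c(0, 0), Sigma = rho)  # correlated standard normals
u   <- pnorm(z)                                 # uniform scores (the copula)

ret_A <- qnorm(u[, 1], mean = 0.07, sd = 0.15)              # asset A marginal
ret_B <- qlnorm(u[, 2], meanlog = -0.05, sdlog = 0.10) - 1  # asset B, skewed
cor(ret_A, ret_B, method = "spearman")  # rank correlation survives the marginals
```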

Copulas Vs. Correlation

Copulas and rank-order correlation are two ways to model and/or explain the dependence between two or more variables. Historically used in biology and epidemiology, copulas have gained acceptance and prominence in the financial services sector.

In this article we are going to untangle what correlation and copulas are and how they relate to each other. In order to prepare a summary overview, I had to read painfully dry material… but the result is a practical guide to understanding copulas and when you should consider them. I lay no claim to being a stats expert or mathematician, just a risk analysis professional, so my approach will be pragmatic. Tools used for the article and demo models are Oracle Crystal Ball 11.1.2.1 and ModelRisk Industrial 4.0.
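
To preview the distinction, here is a minimal R sketch (using the mvtnorm package, not the article's tools) contrasting two copulas that share the same correlation parameter but differ in how often joint extremes occur.

```r
# Gaussian vs Student-t copula: same correlation, different tail behaviour.
library(mvtnorm)
n <- 1e5; sig <- matrix(c(1, 0.7, 0.7, 1), 2, 2)

g  <- pnorm(rmvnorm(n, sigma = sig))             # Gaussian copula scores
t5 <- pt(rmvt(n, sigma = sig, df = 5), df = 5)   # Student-t copula, 5 df

# How often do both variables land in their worst 5% together?
mean(g[, 1]  < 0.05 & g[, 2]  < 0.05)   # Gaussian: joint extremes are rare
mean(t5[, 1] < 0.05 & t5[, 2] < 0.05)   # t copula: joint extremes more likely
```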

Excel Simulation Show-Down Part 2: Distribution Fitting

 

One of the cool things about professional Monte Carlo simulation tools is that they offer the ability to fit data. Fitting enables a modeler to condense large data sets into representative distributions by estimating the parameters and shape of the data, and to suggest which distributions (using these estimated parameters) replicate the data set best.

Fitting data is a delicate and very math-intensive process, especially when you get into larger data sets. As usual, automation has made us drop our guard on the seriousness of the process and the implications of a poorly executed fitting process or decision. The other consequence of automating distribution fitting is that the importance of sound judgment when validating and selecting fit recommendations (using goodness-of-fit statistics) is forsaken for blind trust in the results of a fitting tool.

Now that I have given you the caveat emptor regarding fitting, we are going to see how each tool supports modelers in making the right decisions. For this reason, we have created a series of videos comparing how each tool is used to fit historical data to a model/spreadsheet. Our focus will be on:

The goal of this comparison is to see how each tool handles this critical modeling feature. We have not concerned ourselves with the relative precision of the fitting engines, because that would lead us down a rabbit hole very quickly, particularly when you want to be empirically fair.
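
To ground the discussion, here is a minimal R sketch of the fitting workflow itself: estimate parameters for two candidate distributions and compare goodness of fit. The data are simulated for illustration, and MASS::fitdistr stands in for the add-ins' fitting engines.

```r
# Fit two candidate distributions to data and compare goodness of fit.
library(MASS)
set.seed(42)
x <- rgamma(500, shape = 2, rate = 0.5)   # stand-in for historical data

fit_g  <- fitdistr(x, "gamma")
fit_ln <- fitdistr(x, "lognormal")
AIC(fit_g); AIC(fit_ln)   # lower AIC = better fit/complexity trade-off

# Kolmogorov-Smirnov checks; note the p-values are optimistic when the
# parameters were estimated from the same data being tested.
ks.test(x, "pgamma", fit_g$estimate["shape"], fit_g$estimate["rate"])
ks.test(x, "plnorm", fit_ln$estimate["meanlog"], fit_ln$estimate["sdlog"])
```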

Excel Simulation Show-Down: Comparing the top Monte-Carlo Simulation Tools

Excel Simulation Show Down (Part 1) - Defining Inputs and Outputs

Over the last 3 months, we have seen 3 of the 4 major players in the Excel Monte-Carlo simulation arena introduce new releases. We hear a lot of talk about which tool is best, and the truth is there is no perfect answer: it’s a personal thing dictated by user skill, preference and need.

For this reason, we have created a series of videos comparing how each tool is used to apply Monte-Carlo simulation to a model/spreadsheet. Our focus will be on:

To keep the playing field level, we have used a simple additive model: defining a series of distributions (e.g. costs, budget items…), summing them up and analyzing the resulting sensitivity analysis. We have kept things simple, so we are not correlating any of the variables nor using any fancy math.

As you will see, there are definite differences AND similarities in how these packages tackle building a model. We are going to focus on those relating to inserting and copying input distributions as well as defining and analyzing model outputs. The objective is to compare the ease, usability and efficiency of each tool and give people the opportunity to choose for themselves which tool better reflects their needs and preferences.
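
For reference, here is a minimal R sketch of the additive model described above, with invented budget items: define input distributions, sum them, and rank the inputs by their rank correlation with the output, much as the add-ins' tornado charts do.

```r
# Simple additive model: sum input distributions, then sensitivity-rank them.
set.seed(7)
n <- 1e5
inputs <- data.frame(
  labour    = rnorm(n, 500, 80),            # illustrative cost items
  materials = rlnorm(n, log(300), 0.25),
  overhead  = runif(n, 100, 200)
)
total <- rowSums(inputs)

quantile(total, c(0.1, 0.5, 0.9))                    # output percentiles
sapply(inputs, cor, y = total, method = "spearman")  # tornado-style ranking
```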

Correlation and Impact on Monte Carlo Analysis Results (5/8)

All the top dogs in the Monte Carlo spreadsheet universe have distribution-fitting capabilities. Their interfaces share common elements, of course, since they rely (for the most part) on the same PDFs in their arsenal of distribution fitters. There are important differences, to be sure, and it is hoped this comparison will illustrate the pros and cons from a practical standpoint. Before going over our scorecard between Crystal Ball and ModelRisk, there is one more very important capability category begging for review: correlation.
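
A quick R sketch (invented numbers, MASS assumed) shows why correlation deserves the attention: the spread of a simple sum widens noticeably as the correlation between its inputs rises.

```r
# Effect of input correlation on the spread of a two-cost sum (illustrative).
library(MASS)
spread <- function(r) {
  S  <- matrix(c(15^2, r * 15^2, r * 15^2, 15^2), 2, 2)  # covariance matrix
  xy <- mvrnorm(1e5, mu = c(100, 100), Sigma = S)
  sd(xy[, 1] + xy[, 2])
}
sapply(c(0, 0.5, 0.9), spread)  # sd climbs from ~21 to ~29 as r increases
```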

Dealing with Uncertainty

Change is constant. Or so the saying goes. However, even change is ever-varying. So perhaps we should say: Change is constantly changing. As occupants of planet earth, we intuitively know this and yet strive to keep everything the same, at least those things that do well by us. Uncertainty derails the best of our plans, even uncertainties that we recognize up front.

Tolerance Analysis using Monte Carlo, continued (Part 12 / 13)

In the case of the one-way clutch example, the current Monte Carlo quality predictions for the system outputs provide us with approximately 3- and 6-sigma capabilities (Z-scores). What if a sigma score of three is not good enough? What does the design engineer do to the input standard deviations to comply with a 6-sigma directive?
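
As a sketch of the arithmetic involved (with an invented output distribution and spec limit, not the clutch model itself), the Z-score calculation and the effect of shrinking input variation look like this in R:

```r
# Monte Carlo sigma score against an upper specification limit (illustrative).
set.seed(3)
out <- rnorm(1e5, mean = 10, sd = 0.30)   # simulated system output
USL <- 10.9                               # upper specification limit

(USL - mean(out)) / sd(out)          # ~3-sigma capability as simulated

# A 6-sigma directive demands roughly half the output sd, which in turn
# means tightening the input standard deviations that drive it:
(USL - mean(out)) / (sd(out) / 2)    # ~6 sigma after halving the spread
```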

Tolerance Analysis using Monte Carlo (Part 11 / 13)

How do Monte Carlo analysis results differ from those derived via WCA or RSS methodologies? Let us return to the one-way clutch example and provide a practical comparison in terms of a non-linear response. From the previous posts, we recall that there are two system outputs of interest: stop angle and spring gap. These outputs are described mathematically with response equations, as transfer functions of the inputs.
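
For readers new to the three methods, here is a minimal R sketch comparing them on a toy transfer function Y = X1 * X2 with invented tolerances; the clutch's actual response equations are in the earlier posts.

```r
# WCA vs RSS vs Monte Carlo on a toy transfer function (illustrative).
f   <- function(x1, x2) x1 * x2
nom <- c(5, 2); tol <- c(0.1, 0.05)   # nominals and +/- tolerances

# Worst-case analysis: push both inputs to their extremes
wca <- range(outer(nom[1] + c(-1, 1) * tol[1],
                   nom[2] + c(-1, 1) * tol[2], f))

# RSS: linearize and root-sum-square the deviations (tolerance = 3 sd)
sens   <- c(nom[2], nom[1])           # partial derivatives of x1 * x2
rss_sd <- sqrt(sum((sens * tol / 3)^2))

# Monte Carlo: sample the inputs and observe the response directly
x1 <- rnorm(1e5, nom[1], tol[1] / 3)
x2 <- rnorm(1e5, nom[2], tol[2] / 3)
c(wca_lo = wca[1], wca_hi = wca[2], rss_sd = rss_sd, mc_sd = sd(f(x1, x2)))
```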
