This is a special guest blog post by Stefan Bengtsson. Stefan has already written three articles prior to this one - on model blocks, the role of data, and modeling paradigms - and although it is not a requirement, we do recommend reading those before you dive into this post.
In this post, Stefan discusses a number of facets of Simulation-Based Management, a term he coined to package his views on the matter. He starts with the big WHY of simulation and works all the way to whether simulation should be outsourced in an organization, touching on many important topics along the way.
This will be my fourth and final blog post - at least this time around. Here I want to address the big WHY!
Why do we bother with simulation?
What is the purpose?
Through what lens should we view these possibilities and this competence field?
I feel the focus tends to tilt too much towards the technology - the How - thereby sometimes losing sight of this even more important question. This is where we ought to start - and end!
My first post was about the How, discussing blocks. My second addressed the When, arguing that a vast number of opportunities to add value with simulation are missed because of a flawed view (I claim) of the role of data. The third post returned to the How, but on a higher level - questioning the paradigms as such. And now I hope to tie it all together. Wish me luck ...!
Systems Thinking
In my previous blog post about data, I emphasized that dynamic modeling is 100% about the logic of the system - and 0% about data. A slide I have used for a long time in almost every presentation related to simulation is the one shown below. It shows a model that I attribute to Professor Michael C. Jackson, and I use it to motivate why simulation models are so important and can add so much value. The model points out that when we think about systems, we need to take at least two dimensions into account:
- how complex the system is
- how much the stakeholders agree or disagree in relation to the system (when it comes to objectives and the means to reach them).
The more complex the system is (the further we move downwards in the model) and the more disagreement or conflicting perspectives there are (the further we move rightwards in the model), the more challenging the situation is.
Prof. Jackson points out that attitudes and methods suitable for challenges and systems that are Simple-Unitary are seldom appropriate when we move to the other boxes - and less so the further away we move. He calls these methods Hard systems thinking - more mathematically and engineering-based methods.
My obvious conclusion is that dynamic modeling and simulation can handle all the boxes. Why?
Well, Hard systems thinking really consists of methods that provide an answer, that have an "opinion". These methods are prescriptive - they prescribe an answer. Simulation does not have an opinion - it is a descriptive method. All we do is describe, to the best of our ability, the system under consideration so that we can better understand it - and the consequences of various scenarios. Whether the output from our simulation is good, bad, impossible, unacceptable, or whatever is a judgment we make as decision-makers. The model, method, or simulation run does not care - it just describes the logic of the system and the consequences it produces (given a specific scenario in terms of parameter values).
This descriptive trait is what makes simulation the preferred alternative (or rather, it should be) the more complex an issue is, the more perspectives clash, and the less data we have ...! And in the real world - outside sheltered workshops and laboratory-inspired environments - we hardly ever stay in the safe Simple-Unitary box. That box is only relevant for those who stay on the cliff - to make another parallel to my post about data.
Professor Jackson does not really (to my knowledge) relate his conclusions to simulation, other than marginally. But I claim he should - since there is most likely no competence field that even comes close to simulation in adding value when we address system challenges. He mentions System Dynamics (SD), but not really the full competence field. SD adds the system perspective to an issue - and that is good. SD adds the dynamic perspective to an issue - and that is also good. But SD really only adds value if you do not already have competence in dynamic modeling - because if you do, you know that the other paradigms (not that we should think in terms of paradigms ...!) have far more to offer when it comes to modeling. So given a full understanding of dynamic modeling, there is enormous value to add whenever we think about or focus on systems - independent of which box we are situated in, given the model above.
Simulation-Based Management
In 2006 I heard that the US military used the expression Simulation-Based Acquisition (SBA), implying that simulation should be used to evaluate major acquisitions, decreasing the risk of making poor decisions. Excellent, sound reasoning, was my reaction. I did not really dig into what this implied more practically, but I liked the way they had packaged good thinking in a strong expression.
I also thought: why not take it further, the whole way? The value of what we can do with virtual realities created with dynamic models goes far beyond acquisition as a phenomenon. I started to play around with various alternatives, like Simulation-Based Change, Simulation-Based Strategy, or Simulation-Based Decision. In the end, I settled on Simulation-Based Management (SBM). I googled a bit, found no real usage of the expression, and therefore decided to start using it to package and "productify" my view of simulation. What do I mean by SBM? Well, many things, I guess:
1) My philosophy when it comes to simulation
This post is about how we think about simulation - the philosophy - and SBM summarizes and packages my views. Some of these are probably fairly generic, shared by you and many others, but some probably make me stick out a bit. I most likely see a larger potential than most, and I probably also place simulation much higher up, from a hierarchical point of view, than most. This is most likely related to the fact that I am no modeler, I am a "manager". I was a modeler roughly 30 years ago - and you saw a picture of that modeler in my first blog post.
After starting my career with five years of simulation modeling, I spent more than 10 years in various management and leadership roles (unrelated to simulation) - and also studied management (Business Administration). When I then returned to the field of simulation, I could look at its possibilities with other eyes - now more "from above". So the start of my career - learning modeling through SIMULA/DEMOS - as well as my management profile has formed me and my views on simulation.
2) An analogy with prototype thinking
In product development, having a phase in the development process called prototyping is obvious to many. Why is it not just as obvious to have that phase in other circumstances - development of a business, market, operations, organization, or supply chain? It should be, since these development processes are usually more complex, with more perspectives involved, than the development of even the most advanced product. Given this, I decided long ago to use the prototype analogy as one of the more important ones to communicate SBM.

I came up with the picture above back in 2008, waiting to go into a meeting with county representatives involved in healthcare. I have redesigned it over the years, but its core is the same. I wanted to compare with how many people were used to thinking about change and improvement initiatives, using the PDSA cycle (Plan Do Study Act; Deming). This is a sound way to think - but also pretty basic and obvious. The problem with PDSA is that you often need to rotate through the cycle many times before you reach the objectives you want. I wanted to emphasize that using dynamic models and simulation in the development process can often help you reach those objectives faster (climb higher on the ladder). And I wanted to illustrate that iterations are a key aspect of the process - iterations to develop the model, iterations when using the model (simulating), and iterations in the improvement/change efforts as such. So I use that picture (or similar ones) as a logotype for SBM.
3) An indication of the level we should move our mindset to
I also - with the M in SBM - want to raise the level when it comes to simulation. This is not a "technical competence" (or at least not just a technical competence); it is very much a management competence. More technically tilted issues can of course be relevant, but they are just subsets of the whole vast field of opportunities where simulation can add value. I think one way to realize this is to list some of the outputs we get from dynamic modeling and simulation. We get an understanding of:
- the big picture and systems
- consequences of various alternatives and scenarios when making decisions
- the impact of processes, resources, decision-making logic, ... on the performance of the system
- how different indicators are in conflict with each other and that there are "always" both pros and cons to consider
- causality, what-if, and scenario thinking in more general terms
- ...
4) Dynamic modeling => management competence
What I listed in the previous bullet points makes up a pretty big chunk of what management is - or at least, in many cases, should be! And working with dynamic modeling gives the modeler - and others directly involved - an understanding of this that is quite hard to get through other career paths! But to capitalize on the management level of simulation, we must understand how to handle this from an organizational point of view. I will come back to this later in the post.

I started my career working 5 years with modeling and simulation as a consultant - and then went directly on to work at a high level with production, logistics, change management, organization, and operational effectiveness. After those 5 years, I probably knew as much - or more - about processes, production, supply chains, ... as I would have after working 20-40 years with them "in real life". Working virtually with issues gives you massive leverage, experience-wise, compared to real life. You have time to make so many more mistakes (that you hopefully learn from) designing virtual realities, and you are forced to understand the core of whatever issue you are confronted with. That is, at least, provided that you as a modeler are formed by a modeling platform that trains you in thinking and understanding (which should not be taken for granted - something I raised in my first post, about blockification).

A pilot spends roughly 80% of training time in a simulated context (be happy about this ...! Imagine the alternative - sending rookies straight up in the air!). And in the same way, we can learn so much more about all these management-related issues in the same given time if we are forced to dynamically model the businesses, systems, processes, or issues. We are forced to grasp the core of it all - because otherwise our flaws are revealed when we simulate! So if I ever - in this life or the next - become in charge of management education, the most important component in several of the courses would be - somehow - dynamic modeling and simulation!!!
What values do we add?
For the last 1-2 years, I have had reason to think about simulation and dynamic models in the context of education and the school system. Given what I know, I was and am convinced that dynamic models have a lot of (largely untapped) value to add as a new tool to build lectures around. I even packaged my thoughts under the tag Simulation-Based Lecturing (of course inspired by SBM)! So I asked myself: how do we explain dynamic modeling if we compare it with mathematical modeling? I landed on four perspectives. We add:
- The Time perspective: consideration of, and respect for, the passage of time - one of the dimensions that makes the models dynamic.
- The Stochastic perspective: the possibility to actually capture variation in our models - and to let the variation "live". This is another dimension of being dynamic. In mathematical models, we can only capture variation with static numbers (like a standard deviation).
- The System perspective: with our models, we can capture the whole in a way we never even get close to with mathematical models.
- The Visualization perspective: we can visualize the course of events, output, input, the system logic, ... with the help of the previous three perspectives.
Another way to compare mathematical and dynamic models is to conclude that the former gives quantifiable entities as output, whereas the latter produces a course of events (or at least a development over time).
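To make the Time and Stochastic perspectives concrete, here is a minimal sketch in plain Java (not an AnyLogic model - all numbers are invented for illustration). A static calculation says a server that is busy 90% of the time keeps up; letting the variation "live" in a simple single-server queue shows the waiting that the average hides.

```java
import java.util.Random;

/**
 * Minimal single-server queue sketch. All numbers are made up for
 * illustration; this is plain Java, not an AnyLogic model.
 */
public class VariationLives {

    public static void main(String[] args) {
        Random rng = new Random(42);
        double meanInterarrival = 10.0; // minutes between arrivals, on average
        double meanService = 9.0;       // minutes per service, on average

        // Static ("mathematical") view: 9 < 10, so on average the server keeps up.
        System.out.printf("Static view: utilization = %.0f%%, so 'no problem'%n",
                100 * meanService / meanInterarrival);

        // Dynamic view: let the variation "live" by drawing exponential times
        // and stepping through the course of events, customer by customer.
        int customers = 100_000;
        double arrival = 0, serverFreeAt = 0, totalWait = 0;
        for (int i = 0; i < customers; i++) {
            arrival += expDraw(rng, meanInterarrival);
            double serviceStart = Math.max(arrival, serverFreeAt);
            totalWait += serviceStart - arrival;              // time spent queueing
            serverFreeAt = serviceStart + expDraw(rng, meanService);
        }
        System.out.printf("Dynamic view: average wait in queue = %.0f minutes%n",
                totalWait / customers); // roughly 80 minutes for these numbers
    }

    // One exponentially distributed draw with the given mean.
    static double expDraw(Random rng, double mean) {
        return -mean * Math.log(1 - rng.nextDouble());
    }
}
```

With these assumed exponential arrival and service times, queueing theory predicts an average wait of roughly 80 minutes - it is the variation, not the average, that creates the queue, and only the dynamic view reveals it.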
Why do we then bother with our dynamic models? In my SBM logotype picture, I claim there are three main reasons - to Visualize, to Analyze, and to Learn (or Train). Sometimes a project will have a mixture of all three as its objective, but more often one or two will dominate. I gave the example of using models in school to illustrate that there the focus is a combination of Learn and Visualize - but not so much Analyze. The teaching/learning is the end objective, but visualization is the critical means that enables it. And the same really applies when analysis is the key objective. I have claimed before that analysis or "correct results/output" should never be seen as the end result. The true end result and objective should be that the stakeholders understand and accept the conclusions and messages of a simulation project - and to achieve this, correct output is not enough. We need to make the stakeholders accept this output - and for that, visualization is quite often needed. It is a means to create understanding, to create trust, to create acceptance.
One of the modeling projects I was involved in where the positive impact was most obvious was a hospital-to-be: a very high-profile hospital in Stockholm, aiming to be state-of-the-art, with High Jingo political stakes at hand. Here my task was given by the question: "Can you simulate whether we think and plan correctly? We need answers in little more than a month's time." My answer was of course: "Sure!" Do you think the analytical end was most important here? Definitely not - not even close. The visualization was - by far!
The visualization of the challenge as such: the issue, the relevant parameters, some relevant indicators, and a very crude description of what the logic (the processes) of a whole hospital looked like. Those involved (among others, very senior medical doctors) of course wanted to throw hundreds and thousands of data records at me, in most cases totally irrelevant. Why was this irrelevant? For a number of reasons:
A project focusing on the future, with a high level of uncertainty involved, does not gain from a very high level of detail in some of the input - since other input will often carry extreme uncertainty. It does not hurt the quantitative results as such - but it risks hurting the expectations related to the certainty of the results. If we want to calculate x + y and we know that x = 12.3456, but all we know about y is that it is between 5 and 15, then we should not communicate the answer as 22.3456 (just because the "expected y value" was around 10). We should instead say that the answer lies in the interval 17.3 to 27.4 (see the small sketch after this list). Quite often, the uncertainty of the results is more important to communicate than any "average". I often get mad when I see forecasts communicated as exact figures. The uncertainty should always be indicated - so a forecast should often rather be presented as an interval.
In this case, the data they wanted to give me was to a large extent historical. For a project looking at the future of a hospital that was still just a hole in the ground, this input was thereby largely irrelevant. It was the best they had for making forecasts - but the level of detail should of course have been kept low.
And even if the data was to some extent correct and partly relevant (the past being representative of the future), it was to a large extent irrelevant anyhow. Quite a lot of it was "output data", like the time a patient had been cared for, given a specific type of need. Historical output data should "never" be used as input data representing a future - because this data is an effect of the historical state of the system. It is "tainted" by a whole bunch of factors that are irrelevant when it comes to decision-making for the future. We will have complicating factors in the future too - but other factors, and those we will hopefully partly capture with the logic of our simulation model. This is why I, in a previous post, called historical data (especially output data) sunk information.
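Returning to the x + y example under the first reason above, here is a tiny, purely illustrative sketch of the difference between reporting an honest interval and reporting a misleading point estimate. Rounding the bounds outward is what turns 17.3456-27.3456 into the communicated 17.3-27.4.

```java
/**
 * The x + y example as code. x is known precisely; y only as a range.
 * Purely illustrative - the point is how the answer is communicated.
 */
public class UncertainSum {

    public static void main(String[] args) {
        double x = 12.3456;            // known with (deceptive) precision
        double yLow = 5, yHigh = 15;   // all we know about y

        // Honest answer: an interval, with the bounds rounded outward.
        double low = Math.floor((x + yLow) * 10) / 10;   // 17.3
        double high = Math.ceil((x + yHigh) * 10) / 10;  // 27.4
        System.out.printf("Answer: somewhere between %.1f and %.1f%n", low, high);

        // Misleading answer: one "exact" figure built on the expected y.
        System.out.printf("Not: %.4f%n", x + (yLow + yHigh) / 2); // 22.3456
    }
}
```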
In this project, I was able to moderate a meeting with the top decision-makers in the county and more or less say: "You are totally off the charts. Start addressing your severe issues, since you are not even close to an acceptable plan right now." I did this without even having received all the input I had asked for, since I could visualize the problem, make them understand that I understood the issue, and show the dynamics. Exactly what values various indicators ought to have is often fairly uninteresting (and often impossible to know anyhow); it is the magnitudes, the dynamics, and the relative comparison between different scenarios that matter. That is why I - also in a previous post - pointed out that simulation is about relative thinking (not absolute). Here, the combination of expected/wanted patient care production (the number of patients/cases handled over a year), the planned resource levels (especially beds and care places), and the care time needed (to give the wanted quality of care) simply created an unsolvable equation - the back-of-the-envelope sketch below shows the kind of arithmetic involved. That there was a problem with the number of care places had been pointed out by others for a long time - but it was my arguing, with the help of the model and the visualization, that finally contributed to action being taken!
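For illustration only - none of the numbers below come from the actual project - here is a minimal sketch of why such an equation can be unsolvable. It uses Little's law: the average number of occupied beds equals the arrival rate times the average care time.

```java
/**
 * Back-of-the-envelope version of the "unsolvable equation". All numbers
 * are invented for illustration - none come from the actual project.
 */
public class BedCheck {

    public static void main(String[] args) {
        double casesPerYear = 90_000;   // wanted patient care production (assumed)
        double careDaysPerCase = 4.0;   // care time needed per case (assumed)
        double occupancyTarget = 0.85;  // sustainable bed occupancy (assumed)
        double bedsPlanned = 700;       // beds in the plan (assumed)

        // Little's law: average beds in use = arrival rate * average care time.
        double bedsInUse = casesPerYear / 365.0 * careDaysPerCase; // ~986
        double bedsNeeded = bedsInUse / occupancyTarget;           // ~1160

        System.out.printf("Beds needed: ~%.0f, beds planned: %.0f%n",
                bedsNeeded, bedsPlanned);
        // If needed far exceeds planned, production, beds, and care time
        // form an equation with no solution - whatever the detailed data says.
    }
}
```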
So under this headline, I just wanted to emphasize that we must consider all three deliverables simulation can give us - to Visualize, to Analyze, to Learn. If we only focus on Analyze, we will severely limit the potential to do good. Most would probably not even have contemplated accepting the hospital project I described above, thinking that the data and input were too uncertain and to a large extent nonexistent. Well, for me that just makes dynamic modeling even more interesting to use, since I know that value can "always" be added in one way or another. Simulation is not about handling certainty - it is about helping out in situations of uncertainty! It is about supporting decisions and change, not creating quantitative results.
A form of art
I am getting closer to the end. I could most likely write at least 10 times as much as I have so far - but you would just get tired of me. Below is a picture of the various analogies I use when I explain and discuss simulation. In different situations, one is more appropriate than another, so I think it is important to have a small portfolio.
All the parallels above are relevant, to varying degrees. There are probably other relevant ones, but these are the ones I use. If I had to choose two comparisons, it would probably be the prototype (which I use to symbolize SBM) and the dynamic formula. The latter makes it possible to compare with something most people can relate to - the formulas of mathematics, physics, science, and economics (which are static, not dynamic). Thinking in terms of a dynamic formula also makes it easier to see the potential - the full scope of dynamic modeling. In a way, I think we can see the field of dynamic modeling and simulation as a parallel competence field to mathematics. Mathematics is a "static science", using static models and static formulas. Our field is a "dynamic science", using dynamic models and dynamic formulas. That way we also realize the necessity of a paradigm-free approach. Just as we need Algebra, Geometry, Trigonometry, Calculus, ... as tools in our mathematical toolbox, we need all the paradigms together to make all the dynamic tools available. I intentionally wrote dynamic science so that I could correct myself - because we are really talking about a dynamic art form, a very creative one that demands artists with the talent to create worthwhile models!
The organizational challenge
My finishing subject - split up under three headlines - will be a more managerial challenge. I think we partly - or even radically - block the potential of simulation by handling and organizing this type of competence in the wrong way. I will split the discussion into two parts:
- how we organize the projects
- how we organize more permanently (if at all) in a company, business, or other type of organization.
Roles in a project:
For starters, I think projects are quite often organized in a way that contributes to failure. In several cases this does not matter, but in quite a few the project pays a price. I claim that if you have a project with a Modeler handling one end and a Customer/Manager (without more than a rudimentary understanding of modeling, if even that) being the recipient, then you often have a problem. Because there is a big gap - even a chasm - between the "Model world" and the "Real world". And the most important challenge in a project is often to bridge that gap. The "real world end" will "never" understand the possibilities, challenges, and quirks of dynamic modeling well enough to judge what should be done. That implies that the "modeling end" must understand the needs of the business/customer/stakeholder well enough to lead the process - because otherwise you will often not succeed in bridging the gap. So the modeling end needs (at least) two roles - not one. You need a Modeler and you need a Bridger (with a high level of understanding of both modeling and the real-world challenges). Without this, quite a few projects will produce technically well-designed models (at best) that unfortunately are irrelevant from the stakeholder's end. The Bridger must ensure that the stakeholder's real needs are addressed, not just the needs the stakeholder expresses (the Bridger must look beyond that). And the Modeler and Bridger roles can quite seldom be handled by one individual.
Organizational packaging:
Some organizations - but far too few - have realized that competence related to simulation can have value and that it is a good idea to have that competence internally. Well done! But even then, they have hardly ever realized the full scope of what you can do with this field of competence. A key factor is which headline you put the competence under. In Sweden, if you find courses about simulation, especially in the academic system, you more or less inevitably find them under the headline Engineering (or Technology) - and then you have by definition drastically limited the scope of the value it is possible to add. In many other countries, you find related subjects under the headline Management - and then you have at least given yourself the opportunity of a wider scope. So the positioning of simulation as a phenomenon is important, because it affects the scope you have and how you think about the opportunities. And the same goes for an organization. We can probably agree that dynamic models can add value within Operations/Production/Logistics, but also when it comes to Marketing/Sales, Accounting/Finance, Human Resources, Strategy, Business development, etc. But to support all these functions, you need to put the competence within an organizational unit that spans the whole organization. In reality, you usually find these teams tucked away within Operations - not necessarily with a scope that even covers the whole of the operational challenge.
So one end of the issue is that the potential to contribute is hampered if the competence is not positioned high up in the organization. The other end is that this also risks limiting the competence level of the individuals within the team. I firmly claim that modeling competence improves if you are trained in addressing many widely different types of problems, compared to just focusing on a more narrowly defined to-do list. So if you have two teams (with comparable talent levels), one working with all types of challenges within an organization and the other working only within the Operations function, I claim that the first team will on average perform clearly better - also when addressing operational issues. Breadth drives competence when it comes to dynamic modeling, not depth!
About outsourcing
I have several times in my career had reason to ponder the challenges of outsourcing. Once I was even given the role, as a manager, of building a small team to handle the gap between a company's just-outsourced production for a product portfolio and the development organization for that portfolio. It turned out to be something of a kamikaze mission, since the production never ought to have been outsourced in the first place (at least not then). The company thereby lost firm control of one of the most critical phases in the value chain for its products. Outsourcing can be right if it is done for the right reasons and in the right phase of the business life cycle. But quite often this is not the case. Costs are cut short-term, but quite often increase long-term. And a classic mistake is that the duties that are outsourced are "dropped" - handed to someone else you make a deal with, after which you turn your back on them. And this is quite often wrong, wrong, wrong!!! Successful outsourcing presupposes that you realize that you still need the competence to manage and steer whatever you have outsourced. So if you have 100 individuals working with something, you should often not outsource them all. You need (at least) the most senior or competent one to remain - so that she/he has the ability to control the outsourced 99.
And the reason I discuss outsourcing here is that the same logic largely holds when discussing whether we should have in-house competence or use external consultants or advisors. It is good that there is a mix of consultants and in-house competence working with simulation - but the proportion of consultants in this mix is too large. Companies ought to realize that to get real value out of simulation projects, you need someone in-house to be responsible - a "simulation director". Whether that individual uses in-house competence or consultants to carry out the projects is of lesser importance.
This problem is not specific to simulation; it is quite generic, I claim. If the customer lacks the internal competence to receive the outputs delivered, even the best projects will have a hard time having a long-term positive impact. And this is bad, independent of whether you are on the consulting or the customer end. Having a qualified client might be more demanding in one way, but it is often more rewarding in another. So an organization usually needs an "internal champion" who can continuously handle the internal struggles, ensuring that the value delivered by e.g. consultants survives on the internal battlefield of prioritization. Without this, a lot of consultant-handled projects could just as well have been canceled (even when they were competently executed).
A Systems Office
Now I will conclude this organizational reasoning by sharing how I think this ought to be handled. I think many organizations would gain from having what might be called a Systems office. This unit would consist of individuals defined by wide-spanning competence - generalists, not specialists. Given what we know about the value of dynamic modeling competence, this would of course be included. But it would be combined with individuals whose competence lies rather in change management, business/organizational development, operational effectiveness, and strategy. It would be a type of "think tank", supporting the whole organization - but also the "right-hand competence center" for whoever has the overall responsibility. Thereby the role would be both supporting and "steering", in a delicate mixture.
A comparison can be made with a Program office or a Project office, where you gather competence and roles involved in project and change management. But those offices have a more temporary time scope - and focus on the How of change and development initiatives. The Systems office - in my view - should have a more permanent mindset and focus on the Why, the Which (change project to decide on, decision to make, ...), and the What (effects the change will have on the whole business, short- and long-term). It would be a team of professional Devil's advocates, trying to contribute by always keeping the system perspective in mind. The unit might support one function (e.g. Operations, looking at a new production line), but will also contribute wisdom and conclusions about how this will affect other functions and perspectives. So it supports both Operations and the whole business at the same time - which is possible since the unit is not placed under any function. It is free-floating!
For me, this organizational construct has the potential to handle the gaps we find in many organizations - the gap between the strategic and operational perspectives, the gap between different functional perspectives (often partly in conflict with each other), the gap between different processes in the organization. In a way, I have already tried this in the county of Stockholm. I offered my services to the county after the interest in, and success of, the hospital model I mentioned above. I said that if a new unit - a "competence center" - was formed, I could, as its manager, form, build, and lead it. I called it "Systems Thinking and Simulation" - but today I would have called it a Systems office.
My experiences also give me reason to share some words of warning. Having the type of competence we have - mastering the mysteries and opportunities of dynamic modeling and simulation - means that we have an understanding of decision-making, of pros and cons given various scenarios, of the whole system, etc. that very few have. If we use that competence lower down in the organization, we can contribute less, but we are usually left alone - adding some value here and there that the decision-makers sometimes consider, sometimes not. If we move this competence up to the top of the hierarchy - where we can add the real value (given that what I claim is correct) - we also immediately risk being seen and treated as a threat! In a way, this is natural: if there is a lot to change and improve, the reason is - at least to some extent - that the current management is to blame. The ability to help out therefore threatens.
So in my case, I was blocked and invisibly stabbed in the back from day 1, and my conditions for even joining the organization were postponed and ignored. The bureaucracies of the public sector are very much formed and influenced by power games, manipulation, and "politics". The small team I was slowly able to build certainly contributed, but only marginally given the potential and the need.
So simulation-related competence should be placed high up in an organization, given my reasoning here. Only then will we unlock the real potential and the possibility to do good. But having said that, be sure to wear body armor - possibly made of titanium - because you risk entering a battle zone. Securing a mandate, a stable platform, and top management support is crucial!
Wrapping it all up
Puuiii! This post became quite long, and if you have also read my other three posts here at The AnyLogic Modeler, I am impressed. You might agree with some of my insights, views, and opinions - but perhaps not all. That is good! If two individuals agree on everything, one of them is superfluous ...! And I would hate to be that one.
I hope my experiences and reasoning have added some value. I certainly enjoyed sharing and I thank The AnyLogic Modeler and Jaco-Ben Vosloo for the opportunity. With these posts, I have shared my views related to the wonderful world of simulation, my philosophy. I will end by again showing you how I sometimes describe this competence field (also found on a slide from my 2nd post):
Simulation modeling is the art of creating a dynamic picture of a real-world system and its challenges, in some form. By letting it “live a virtual life”, we can – by the possibility to better understand the consequences over time and visualize both issue, course of events, and logic – add enormous value (compared to the alternatives) in both qualitative and quantitative terms!!!
To summarize what I have addressed and tried to claim in this post and the other three is really impossible - so, of course, I will try:
- Widen the scope of how you look upon simulation - drastically! It can be much more than what you most likely see it as currently.
- The less input you have, the more relative value a dynamic model might have to add!!!
- Simulation is not just (or even primarily) about generating results - and when it is, the results are almost entirely relative, not absolute (because the future will always remain uncertain - even though simulation can help us clear the fog better than probably any other alternative). Simulation is about supporting change and decision-making. Simulation creates the possibility to reflect and discuss, given its iterative nature and visualization (provided this opportunity is used). It stimulates thinking and focuses on the core of an issue - the system.
- Given the above, an improved level of understanding is a key output - or can be, if we use the full potential. For the stakeholders, to make better decisions. For the modelers, developing their own competence (not just related to modeling). For pupils or students, if used in lecturing, since thinking and reflection can be stimulated (if the teacher uses the potential of a good model in the right way).
- To succeed in a modeling and simulation project - especially if you widen the perspective and leave the "safe cliff" (referring to my second blog post) - it is quite often vital to realize that the gap between the "modeling world" and the "real world" is huge. There is therefore a high risk of failure if the only parties involved are a manager/customer representative (most likely with a more or less nonexistent understanding of modeling) and a "modeler" (without any real understanding of management, decision-making, and the recipient's perspectives). A third role is needed - a "bridger" or "gap filler" - and the modeling end must be in charge of securing it!
- And to even get close to unlocking this vast potential, we must realize that the organizational challenge is key. The ability to address "all" challenges related to decision-making and the future demands a platform with the mandate to contribute widely. To get that platform, competence related to dynamic modeling and simulation must be integrated into the organization - at the right level - to a larger extent than today, not primarily separated and handled through a consulting and "outsourced" platform. Simulation is about management - but it can contribute on all "levels", more operationally and functionally or more strategically, spanning the whole scope of an organization!
Cheers!
Stefan
Stefan Bengtsson is a guest writer for the AnyLogic Modeler. Feel free to connect with him over LinkedIn.
What next?
If you liked this post, you are welcome to read more by following the links above to similar posts. Why not subscribe to our blog or follow us on any of our social media accounts for future updates? The links are in the menu bar at the top or the footer at the bottom. You can also join the mobile app here!
If you want to contact us for some advice, maybe a potential partnership or project or just to say "Hi!", feel free to get in touch here and we will get back to you soon!