Podcast 002 – Hillary Sillitto – Architecting Systems: Concepts, Principles and Practice

Architecting Systems: Concepts, Principles and Practice.


Show notes

Interview with Hillary Sillitto, author of “Architecting Systems: Concepts, Principles and Practice”
Check out Hillary’s website here to learn more about the book: http://sillittoenterprises.com/

Links to things mentioned in the interview:

Time stamps:

0:01:41 – Hillary self intro
0:02:20 – His mission
0:05:05 – Identification of definitions
0:09:15 – Human side of systems engineering
0:10:25 – How to sell the idea of systems to people?
0:14:20 – Identifying obstacles
0:17:26 – Hillary’s transition to systems engineer
0:21:05 – Culture differences in organizations
0:24:15 – Differences between industry and civil service
0:27:17 – Proud moments and failures to learn from
0:31:37 – How to develop your career?
0:34:00 – Bad advice to ignore
0:37:40 – Technical specialization to broader systems
0:40:42 – Nature of requirements
0:47:00 – Key books to read
0:49:26 – Decision map
0:53:15 – Biggest opportunities and challenges in the systems engineering community
0:57:49 – Advice to the systems community


Joshua: Hi Hillary. The purpose of this interview is to offer guidance and encouragement to those interested in careers in the design, making, and maintenance of systems, from people who’ve already had a successful career in the industry. And as a holder of the INCOSE ESEP qualification, with teaching positions in systems at the Universities of Bristol and Strathclyde, and as the author of a book on architecting systems, I think you’re certainly qualified as someone successful in the industry. Could you introduce yourself and explain any mission you’re on right now?

Hillary: Okay. Where do we start? I live in Scotland. I’ve worked in or from Scotland for my entire career. I started life as a physicist, I became an optical engineer, and I became the head of systems engineering in the Thales organization in the UK. I’ve been a member of INCOSE, the International Council on Systems Engineering, for about 20 years. As well as being an ESEP, I’m also a fellow, and I’m also a member of the Omega Alpha Association, which is an honor society for systems engineering; I think it has about 50 or 60 members at the moment.

Do I have a mission right now? I’m leading a team of INCOSE fellows who were asked to develop a new definition of systems engineering for INCOSE, and we’ve also been looking very hard at definitions of system and the different beliefs different people have about what a system is. This has actually led to four papers and a panel proposal that are gonna be published at the forthcoming international symposium in 2018 in Washington. We put another two papers out last year, one in the systems engineering journal and one in the proceedings of the 2017 international symposium. And this is all about what a system is, why different people have different beliefs about what a system is, and how that affects their approach to systems engineering and systems science. So the fundamental value or benefit of what we’re trying to generate is to get people to realize that not everybody thinks the same as them, and that we need to respect the diversity in the field in order to get people working effectively together. So that is one mission.

Another mission, I guess, is based on the book that you mentioned, Architecting Systems, which is now the basis of a training course that I do every summer in the Swedish archipelago. And I guess the essence of this is, again, about communicating the ideas of what systems are and what’s important about them, and also, in terms of systems architecting, making sure that people spend time looking at the problem space before diving into the solution, and making sure that people consider the logical or functional view as well as the physical one. A lot of people whose role is systems architecting tend to dive straight into an assumed physical solution, and that’s not actually an architecture, that’s a design. So I guess all of my mission really is about getting people to think more clearly about what they’re doing, about what the system they’re interested in is, how it’s gonna work, and then to understand the set of different views you need to develop when you’re trying to move from an initial concept to a successfully delivered operational system.

Joshua: That’s very fascinating. I didn’t know about this work on the definitions. I think that’s very needed. 

Hillary: Yeah, the Systems Engineering Journal, May 2017; Dori and Sillitto is one of the papers. The other one is by me and about eight other people, from the Adelaide conference. And in fact I’m doing a webinar for INCOSE in April on this, on what is a system.

Joshua: I will watch that webinar. What sparked the need to do this work on the definitions?

Hillary: If you go back historically to the 90’s when INCOSE was formed … well, NCOSE, the National Council on Systems Engineering, which became INCOSE, the International Council on Systems Engineering, a few years later, a lot of the impetus was from industry and the DoD to get better control over the very large, technologically complex systems that they were trying to develop. And that had, I guess, a lot of benefits from an industrial point of view in terms of better formalizing how you need to approach putting complex technological systems together.

There was a very strong process view which, in the opinion of myself and others, diverted attention from the fundamentals, like what systems are and how they’ll actually work. There was also a focus on the technological part of the system of interest and perhaps, in some people’s view, not enough attention paid to what happens when you put a complex technological system into a social system, creating a complex socio-technical system. A lot of the problems we have nowadays in systems engineering, both in getting systems used the way we envisioned they’d be used and also actually getting the development process properly managed and directed, are on the social side of the socio-technical divide.

And the other thing is, if you look at the definitions that INCOSE publishes for system and systems engineering, they tend to be focused on man-made, artificial, technological, and to some extent societal systems. Increasingly, the system problems we’re getting now are interference between human-made systems and the natural environment, climate change being a very obvious example, but there are lots of others: pollution and unsustainable use of resources. So expanding the definition of system to include naturally occurring systems, getting greater clarity on the fact that you get systems in the pure information space as well as the physical space, and what that means. For example, two information objects cannot interact with each other except when mediated by some sort of physical processor, whether it’s a brain or a computer.

And from a pragmatic point of view that might not sound important, but from a formal point of view, as we become more dependent on complex models in systems engineering, it matters: if you don’t understand the set of relationships between a model, a bit of computer code, data or information, and the physical matter-energy sub-systems that process that information and cause interaction between information and the real world, then you’re not gonna be very good at designing useful systems.

And the other thing is, as you said earlier, there’s a widespread belief that either systems engineering, or a lot of the approaches and principles embedded in systems engineering, are of wider utility in big, complex societal problems, and there’s an aspiration among the systems engineering community to see our skills more widely used. But in order to achieve that, we have to move away from this idea that you start off with requirements and run a systems engineering process, and take a much more holistic view of the problem, the problem space, how that interacts with different stakeholder interests and concerns, and then, if you’re gonna develop a new system, how that is gonna interact with the existing problem situation, and are you gonna get what you want out of that set of interactions.

Joshua: Yes. I can certainly … Several people have mentioned to me that this human side of systems engineering has perhaps been neglected for too long, and maybe because the profile of the type of people who enter the systems engineering profession tends to be physicists or mechanical engineers or electrical engineers, from their training the focus has been on sort of technical things rather than the human side.

Hillary: It’s interesting, there is a perception of physics as being very technically focused and reductionist. Actually, if you read the latest papers, quite a lot of the articles in Physics World, which is the Institute of Physics house journal, are about complex systems, complex emergent behavior, and the challenges that people have as physicists trying to influence the world. There is a huge philosophical strand to all this going all the way back to the Enlightenment, the second half of the 18th century, about evidence-based science and improving society by the use of approaches based on objective evidence rather than belief. That whole Enlightenment project seems under threat with some political developments that are happening now. But that’s probably a little bit too far to go for this particular conversation.

Joshua: A question on that space then: you’re obviously someone who is convinced of the value of systems thinking and these sorts of approaches. How do you go about convincing other stakeholders who either aren’t convinced or just don’t have an opinion, and whom you would like to bring on board?

Hillary: The ‘don’t have an opinion’ case is relatively easy. When they’re not convinced and believe that they’re right, that’s a lot more difficult. The first problem is that, as with any serious attempt to predict the future, systems engineering says ‘do this and things will be better’. There is an element of luck involved, or agency if you like, in how any of these things turn out. So everybody can point to a project that succeeded in spite of not following good practice, and everybody can point to a project that failed even though it did follow good practice. So you have to start off with the premise that if the statistical evidence is in your favor, that’s the right thing to do, even if you can’t guarantee to get the result that you want.

And we see this kind of reluctance to follow the statistics in everything from buying a lottery ticket in spite of the overwhelming evidence that you won’t win anything, to ignoring the science of climate change if it doesn’t suit your political agenda, to ‘we don’t have time to do this right’ — unspoken subtext: but we will have time to do it a second or third time if the first time fails. The other issue is that the current systems engineering model, which is very heavily based on analyzing a problem, documenting requirements, getting sign-off on those requirements, and building to those requirements, is, if you like, a ballistic model, in that the outcome will be determined, if you have a strong enough process, by the initial conditions.

Now in the modern world the problems are changing quite quickly. In a complex system the full set of requirements is unknowable, possibly ever, and certainly until you start using the real intervention system to change the real problem system. So there’s an awful lot of stuff in the systems engineering literature about spiral models, about evolutionary development, about the incremental commitment spiral model, which I think is a very good articulation of all that, which the definitions haven’t yet caught up with. So this idea that systems engineering needs to become a continuous learning process, rather than a deterministic follow-the-plan-and-everything-will-be-alright process, is something that the received wisdom, if you like, is still catching up with.

And I always find it very important to try to understand the problem, kind of from first principles: what is an appropriate approach, which bits of the systems engineering toolkit are going to be most effective in the situation we’re in at the moment? Do we understand where we’re trying to get to, what are the issues that may stop us getting there, and how do we deal with them? Now, the other problem you have selling that kind of approach to somebody who believes willpower will achieve anything is that it sounds negative, there is a possibility of failure. ‘No, I will not accept the possibility of failure’, and at that point it’s really difficult to have a sensible conversation.

So I think the way you convince or sell people on the idea of systems very much depends on their mindset. If they recognize that they’re in a high-risk situation and want to use approaches that deal with those risks effectively, then you have to recognize the risks. If they say we’ve decided to do this, we’re not open to reasoned discussion, we’re going off in this direction, we will not have a plan B because that implies defeatism, then it’s quite difficult to apply good systems engineering in that mindset.

Joshua: Yes, because it seems like the business world, although they can’t really estimate the probability of failure with any accuracy, do tend to accept that a new product may not sell very well, and the whole process might just have been a learning experience to discover this product doesn’t work very well for our company. But yes, a deterministic systems engineering approach to bringing to market a very complicated new product, where technically it might be very hard to assemble and make work, sounds a bit foolish if you’re not willing to accept this.

Hillary: Yeah, there are different kinds of situations, and of course systems engineering originated in situations where you weren’t gonna get a second chance. Space shots for example … well okay, you can send a space shuttle up to fix the aberrated mirror in the Hubble telescope, but it’s gonna cost you a billion dollars. If anything goes wrong with the James Webb telescope on the far side of the moon, then that’s gonna be it. So in that situation the systems engineering has to be absolutely rigorous, in terms of making absolutely sure that you’ve thought through all the possible scenarios and you’ve catered for all the possible eventualities, so that the system keeps working. Obviously if the thing gets hit by a large asteroid, it’s gonna break up and there’s nothing you can do about it, but in terms of all the normal things that could happen and leave you with a recoverable situation, if it’s a spacecraft a very long way away then you’ve got to get all of that right before you launch it.

If you’re launching a consumer product, as long as it’s safe and as long as it’s legal, there’s a strong argument for getting the minimum viable product into the market as quickly as you possibly can and then iterating and getting feedback from that. Provided it’s not so bad that it destroys your reputation — and I guess one rant we might come to later is the fact that we’ve given up on the possibility that you can produce perfect software first time, and that’s resulting in what I think is an unsustainable, in the long term, business of software-enabled systems, and we can come back to that. So yeah, if you’re in the consumer business you can get a lot of evidence about the future of the system from what’s happening now and what’s happened in the past.

If you’re in real time operations like running an airport, then you can see what happens day by day and hour by hour and you can update your model of how the system works very quickly. If you’re sending something, if you’re designing a nuclear submarine or ballistic missile or the Apollo space … The Apollo space program is actually a very good example of iterated development. Every launch built on what had happened before and checked out a few new things. So the approach you use to systems engineering, it’s all about using feedback from what you can do now to improve your confidence in that the next thing is gonna work. And depending on the nature of the problem, the best approach may be different. To use a rather silly analogy, you can’t cross the Grand Canyon in two jumps, but there’s a lot of other problems you can take several steps to solve. 

Joshua: And I remember in your book you used the term learning journey quite frequently, and I thought that was a very strong message, to me at least, where you’re talking about: well, we’ve got to go back and look at the problem again now that we’ve learned something about our potential solution, because there’s sort of a feedback loop and we can improve our solution by reevaluating the problem. I thought that was very nice. And in the INCOSE handbook they have a large section on tailoring, where you kind of read the whole handbook and they say you might not need all of this at all times.

Earlier you spoke about your various positions you’ve had over your career, would there be any sort of very important transitions you made, which sort of changed your thinking around systems or perhaps even got you into systems in the first place?

Hillary: Well I suppose the … well one thing, I suppose, is having spent almost all my life using … My early career was all about using computer-aided lens design optimization to optimize relatively simple systems: a series of optical surfaces, all of which interact with all the other surfaces to create an overall image at the end, and that, if you like, is a classic system problem. Okay, it’s a closed problem in that you can actually define the mathematics and you have got good enough mathematical models. You’ve got a very high degree of confidence that if the model says it’s gonna work, it’s gonna work, and the other thing you have is a manufacturing process that’s very tightly coupled to the modeling and design process, so having designed it you can actually build it with high accuracy. That’s relatively unusual in other system endeavors, other than, I suppose, software and electronic circuitry, where you have got very good simulation tools that give you high confidence that if you model it and it works, then the whole thing will work.

And of course the optical problem in a sense is simple in that you have fewer variables. I suppose that’s given me a view that in principle if you can model the problem properly you can solve it, but also an appreciation that if you can’t model it then you have to use a completely different, almost a different philosophical approach. You can be very sure that the next ray of light that goes through your lens will obey Maxwell’s equations and Snell’s law the same way the last one did; you cannot be sure that the next person who uses the system you’re designing is going to think the same way as the last person that used it and therefore will get the same results out of it. So that, if you like, is coming from the physics end, although physics at the time was beginning to realize that not everything is deterministic even if you know all the equations, and I think that’s a very important point.

Complex systems are not deterministic even if the individual elements of them are. But I suppose moving from being an optical engineer … and I think maybe the important overall thing here is that I was lucky at that stage, in that there were only a few of us doing the optical engineering for a wide range of products and small systems, and that exposed you to a lot of different projects and lots of different kinds of problems in a very short period, so that was really good for learning. And then the transition from optical engineer to systems engineer, I guess, happened when I got involved in trying to get the company I worked for then into the infrared countermeasures business, which is a completely new systems field based on understanding the optics that were involved. A very small group of us did that, and the factory in Edinburgh where I used to work is now churning those systems out by the hundred, by the thousand. Then I moved to a different employer, which had a …

Considering they were in exactly the same business, and superficially it seemed the same kind of organization, it took me a while to realize the culture was completely different. In the first company I worked in there was very much a view that everything was a team effort, whereas in the other company people tended to get given a problem and left to get on with it, in a way that was not necessarily helpful when that problem was bigger than the individual could reasonably be expected to deal with. So it took a while to realize; as I say, superficially it seemed everything was the same, then after a while I realized there were really fundamental differences, which is why different companies are good at different things even if they apparently have the same skill sets and same set of resources. A company is an organism, it’s a complex system, and every one is different.

Joshua: So how the team working was sort of organized, how you were distributing the engineering among the engineers?

Hillary: I’m not sure if it was even organized. It was almost subliminal. Yes, it was the assumptions about how tasks were allocated and how people were expected to work with each other. In the first company I worked for, if there was a major proposal then everybody would drop everything and muck in to get the thing out. In the second one, if you went and found an opportunity to make a proposal for new work, it was almost as if you were causing a problem, you were disrupting the smooth running of the organization. So I guess it was a less entrepreneurial organization; it had been around for a lot longer and it felt as if … I’ll stop there. It felt different.

After that … it was after I changed jobs that I got involved in INCOSE, and that was quite helpful in terms of giving another perspective on what the company was trying to do, because it was trying to move up from products and small systems; it believed there was more money to be made in bigger systems, or at the very least it believed that by playing at the next level you could influence the environment in which you were gonna sell your core products. And I think the latter is probably actually a better representation of how things actually turned out. And then after a while I got involved in the company at corporate level, and that was great in terms of spending quite a lot of time traveling out to France and to different parts of the company in the UK.

Then I applied for a temporary job in the MOD and was seconded into that for two and a half years, so I saw another perspective on the whole business, looking at the integration of all the different systems that the MOD was buying. And again, seeing a different culture, and how very strongly the culture influences what it’s possible to do from any position within the organization. And then back into Thales, applying insights gained from working in the MOD to see how to get some of our projects working better. And then latterly in Thales corporate, which was actually less satisfying, because at some point in your career you find that the organization you’re in launches another change program and you think, actually, I think I’ve had about enough of that, so I took early retirement about four or five years ago.

Since then I’ve written my book, I’ve been doing a certain amount of work based on that book, and I’ve just been trying to close off this project we’ve been doing with the INCOSE fellows on what a system is and what systems engineering is.

Joshua: So it sounds like … my understanding as well is that you have these positions at the universities?

Hillary: I had a visiting professorship at Bristol for years, which was to do with setting up and running the sustainable systems course module, and I was given a visiting professorship at Strathclyde, which hasn’t yet led to very much, but they are now setting up a masters in systems engineering by distance learning and we’re having discussions about that now.

Joshua: I think that’s quite incredible, in that you’ve had corporate work, government work, university work, and now consultancy work, and it sounds like you identified that the cultures are very different, which changed what was possible to get done, or what would even be rewarded. Is there anything in particular you’d pull out, spanning these four different domains?

Hillary: I guess the most enduring thought is that different companies at different times in their development have different cultures, even if they appear to be doing the same thing. So you can’t just assume, if you move into a new corporate environment where everything looks the same and everything feels the same, that everything is the same. Now, moving from industry into the civil service you expect a difference. One of the things that surprised me was the extent to which … yes, in the civil service, and certainly in the culture in the Ministry of Defence at the time, the culture of individual interaction was much more competitive than it is in industry. Industry is set up to get large numbers of people collaborating. The reward and promotion culture in industry, if you like, tries to get the right people in the right place. I think that’s changed quite a lot in the last few years. It’s been much more of …

There’s been very much a change to an HR, human resources, professional management style, which is more to do with getting people into jobs and less to do with getting people into careers, because when I started work it was very much that you recruited graduates and you kind of expected them to stay with you for a long-term career, and graduates expected to get a long-term career out of the same employer, if you like. And I think that’s changed in the last 20 years, certainly in Britain, to much more of, I guess, a hire-and-fire culture, but in the military certainly it’s up or out.

If you don’t make a certain promotion threshold, then thank you very much, here’s your pension. For military officers that starts happening at about 45 if you don’t get above major, and 50 if you don’t get above colonel. So I went from an environment where broadly everybody thought the job was to make the company more successful, and your career would benefit if you did, to one where everybody thought their job was to further their career, possibly at the expense of the other people in the meeting. And that I find quite disturbing, because obviously some people thrive in competition and some do not.

Joshua: Yes. 

Hillary: So you need to understand what kind of person you are in order to work out where you will be comfortable and where you’ll be able to be effective. 

Joshua: Would you recommend more people do something like you did, spending a couple of years in government, particularly if they’re working on defense systems? It sounds like you obtained a lot of valuable insight for when you returned to corporate.

Hillary: Yeah, funnily enough you can’t always use the insight because everybody thinks they know better. And I think one of the things that surprised me is that the corporate organization I came back into seemed to be more like the defense organization than it had when I left, and I think possibly because we had been recruiting more recently retired military guys and civil service guys into the company. So it was bizarre. It felt as if the organization was shifting to become more like its customer, which is not necessarily a bad thing, but it was certainly not what I expected.

Joshua: So across that career, what would be the things you’d be most proud of, or some important failures that you learned a lot from?

Hillary: I guess most proud of getting what is now Leonardo into the infrared countermeasures business, with a very small team of colleagues who I really enjoyed working with, and then, at the end of my career, helping to steer a subsystem for that infrared countermeasure system, an infrared missile warner project that Thales is developing. We had some quite big problems in the development phase, but we actually managed to work through them by applying good systems engineering techniques, based on the incremental commitment spiral model as it happens, but very much: what does the evidence say, what are we trying to achieve, what do we need to do next. There is no point following a deterministic plan if most of the knowledge is white space, so it’s this business of understanding the current situation, understanding what you know and what you don’t know, and then working to answer the question ‘what do we not know?’ before you worry about the things that you think are blocking problems, cause they may turn out not to be blocking problems once you understand more about the rest of the problem.

So it’s all about this learning journey. Project managers are trained to apply a deterministic plan and try to work through it, and if the assumptions behind that plan don’t match reality then the plan will definitely fail. Eventually we managed to get people to realize that there was a different way of doing things that worked better.

Joshua: Yes. It’s almost like having some humility to say that your project plan isn’t … As you said, a deterministic project plan for a problem which has uncertainty is not likely to go as it says.

Hillary: Indeed. 

Joshua: Were there any sort of failures? I often find that failures are where I learn a lot.

Hillary: Well, I was thinking about that question. There was probably one that is quite disconcerting. A long time ago there was something that I designed that I thought I’d designed correctly. We didn’t have much of a peer review culture at that time, so I did the analysis and I thought I’d looked at it from all the different necessary perspectives. Much later I discovered there was a rather arcane problem in getting the system to work, which nobody understood and nobody gave me time to help investigate, because the electronics guys were fiddling with it by then. And it was only about 30 years later — one of the guys that worked on that project is in my sailing club — that I was sort of thinking about it again and suddenly had a flash of insight that there was one thing I’d missed in checking the design, and that would almost certainly have caused the problems we were observing.

So I guess the lesson from that is that you should institute a non-threatening culture of peer reviews, so everybody discusses a design with somebody else, not in the spirit of criticism but in the spirit of making sure it’s gonna work properly. And if you don’t have that, then you should always try and look at the thing from several different perspectives, as well as not trusting the computer. Sketch it out and see if there are any fundamental principles of the thing that will either help or give you problems. And in terms of people, there’s a classic thing in competitive organizations, which is never give away a valuable resource assuming you’ll be thanked and rewarded for it. Always make the deal before you give anything away. That’s a mistake I made twice.

Once when I was playing at soldiers at university, in the officer training corps, which I joined to get paid to learn the bagpipes. The other time was when I was working in the civil service, when I was asked to provide a resource for an important sort of transversal project, but I didn’t attach conditions to releasing the guy, as a result of which we had problems later. So again, this is understanding each of the organizations you’re in, understanding what the unwritten rules are, and making sure you’re not taken as a sucker for trying to help the wider world.

Joshua: Yes. Well that’s very interesting. I think certainly the peer review. I know when I worked in software organizations, what seemed great there is that you check in your new code and it can’t enter the release build until several engineers have read every single line, and in my case handed it back to me with many changes that needed to be made. I think the great thing about software is it is line by line, so you can see everything explicitly. With 3D designs there’s so much going on that it’s difficult to know what to look at. 

Hillary: I guess another thing about how to develop your career, which I alluded to but maybe should make explicit: ask why, and look out from your bit of the system, because most people take a problem as a given and then dig down inside it. Now that’s essential for making sure that you’ve designed it right, what we’ve just been talking about, making sure the thing is actually gonna work properly. But in terms of developing your sort of systems muscles, it’s about understanding how it fits into the bigger picture of what’s happening the next level up: what comes into your bit of the system, and where does it get to after it’s gone through? That’s how you learn about the system that your bit is part of. And not many people seem to be interested in that, so if you are, then ask why, ask clarification questions when you’re given a problem statement. That, if you like, is the mark of a good systems engineer. 

Joshua: Yes. And I think you mentioned in your book that when you identify where the real problem is and then make the system’s boundary go over it, you’re adding work up front, but you’ll be thankful for it towards the end when you won’t have that sort of problem hitting you later. 

Hillary: Yes. Of course, it depends what your success criteria are, but if the goal is a successful system, then the system will not succeed if you have pushed outside the boundary something that needed to be inside it. 

Joshua: Yes. I hope most systems engineers are looking to make a successful system. I hope. So on the mentorship side, if you were giving advice to a novice in the field, either yourself starting out or someone today would there be anything in particular you would advise them?

Hillary: I think probably what we just did, I guess, is the two things.
One is look out from the system, ask why, ask for clarification, make sure you understand the problem before you set out to solve it and then when you think you’ve solved it, make sure you’ve got people that you can trust to give constructive comment, because everybody makes mistakes so the only way we stop having faulty designs is for different people to look at the same design from different perspectives. 

Joshua: Yes. That makes a lot of sense. You ever hear any sort of bad advice which you would ask people to ignore?

Hillary: Well I think the commonest advice that young folk get is to move into management as soon as possible, and I certainly ignored that and don’t have any regrets. I guess the thing is you can make a successful career as either a detailed technical expert or a systems person without having to do very much management. If you’re in the systems field you definitely need to understand the pressures that managers are under. You’ve got to understand about budgets, understand about work allocation. There’s generally a rush in projects to build up the resources and get people doing something even if it’s not the right thing, and if you’re a senior systems architect you need quite a sophisticated discussion with the managers about the rate of learning versus the rate of doing: when you need a large team, and when you need a small team working intensively together to sort out the framework before it’s worth getting a bigger team on.

And that’s something that, depending on the culture, is either possible or incredibly difficult to do. So yeah, I think as a systems engineer you need to have had some management responsibility, but I think the idea that you need to do the management career … If you want to do it, then do it, but if you want to do really interesting stuff then don’t assume you have to jump straight into management. I think you need to get the tick in the box that says you’ve done the basic management things, you know what it’s like to manage something. But you can do that on small projects, get that experience, and then be a very valuable right hand, a chief systems engineer or chief engineer working with a program manager. That combination is much more powerful than one person who tries to do it all, because we all make mistakes and you’re far better working as a person in a two or three person senior team than trying to do it all yourself. 

Because if you try to do it all yourself you will not have time to do the systems work and the management work properly. You can do either, but you can’t do both at the same time cause they’re both more than full time jobs in a medium to big project. 

Joshua: Yes. It doesn’t scale. We talk about these very large systems projects where it’s gonna be impossible for one individual to manage them.

Hillary: Yeah. But then of course the different members of the team need enough overlap to understand what the others are trying to do. If you’ve never tried to manage a team, you don’t understand why your program manager colleague is spending all his time either doing staff appraisals or simply juggling his budget and juggling the resources. Whereas if you’ve never tried to solve a difficult technical problem, you don’t know why, when you said you wanted the answer tomorrow, the answer hasn’t come tomorrow. There are times when you need an answer tomorrow, do we launch or not, and anybody who wants to understand that kind of issue ought to read the material about the Challenger Shuttle disaster, where there was a lot of management pressure to launch the thing. 

There was evidence that said we’re not quite sure about this, and when you plotted the evidence out the right way it was very obvious that they were way outside the launch parameters, but nobody had the mental space to work out the argument that said: no, we’re really uncomfortable about launching, and if we look at the data properly we can justify that discomfort, and you absolutely should not launch. And that only came out in the inquiry afterwards. 

Joshua: Yeah. So you started as an optical engineer and then moved on to being a systems engineer. How important do you think it is to have had, let’s say, very clear and deep technical expertise in a component or a subsystem before moving out to broader systems in general?

Hillary: Well of course having come from that background I think it’s absolutely essential. I am of the generation that believes that background in a technical discipline is an essential credibility earner if you’re gonna be in a more senior post working with technical specialists. On the other hand the problem now is that technical specialization gets so deep and systems gets so broad that it’s really quite difficult to get that combination. You need to be able to talk with authority and credibility to know where people are coming from when they are detailed experts whether it’s a technical field or a sociological field or human factors or whatever. So I guess I’m not a great fan of the idea that you do a systems engineering degree and then you’re a systems engineer. But things change and I could be wrong. 

Joshua: Because I have heard of people saying well, you take a bachelor’s degree in systems engineering, which I was a little surprised about. 

Hillary: Yeah I think you could do a bachelor’s degree level, something or other in systems engineering management and in other words in the process aspect of it, but that doesn’t help you understand how systems really work and what happens in complex systems when things start going wrong. So again I guess from that point of view coming back to where we started I’m worried if people think you become a systems engineer by answering a set of questions about the INCOSE systems engineering handbook and being a work package manager for a few years. 

Joshua: Yes. It’s been said to me that it becomes almost like an accounting job, where you’re sort of managing some dependencies in a SysML model and maybe not thinking too hard. 

Hillary: Yeah, that comes to the difference between systematic and systemic. You need somebody to do the systematic accounting work to make sure everything fits together, but the systemic thinking is about the big picture: what are the interactions we need to think about, what are the strategic decisions that are gonna make the system work or not? Ensure the coherence and integrity of the system, and those do not come out of SysML models; a SysML model merely describes the decision that has been made, it doesn’t make the decision for you. And there’s a heck of a difference between requirements management and requirements engineering. A lot of the systems engineering tools that are out at the moment are so difficult to use and counterintuitive that if you’re gonna become a tool jockey you spend all your time becoming expert in the tool, and you’re not actually working at the strategic level of: is this system gonna work? Are these the right requirements? I’ve got all these requirements linked to each other, but are they expressed correctly? Are they specifying the right things, are they expressed in the right units? The tool doesn’t help you with that. 

Joshua: Yes. Is that the nature of requirements being inherently difficult to handle, or could more effort going into these tools make them better at facilitating that?

Hillary: Well there’s an argument that the entire requirements-tool approach is wrong; it’s essentially reductionist. You’re working on the hypothesis that you can break everything down to the smallest possible elements and construct those elements so they’ll all fit together. But at some point what matters is how they fit together, not what they are, as the theory of complex systems will tell you. So the advice on complex systems, certainly this is what the INCOSE complexity primer says, is to focus on scenarios and mission threads rather than thousands of detailed requirements. And I totally agree with that, because every extra requirement at a tiny level is an extra constraint on the implementation. If you know what the design is going to be, then you can write requirements around the design. But if you don’t know what the design is gonna be, you need to be very careful not to preempt the designer’s freedom to come up with the best answer by putting in things you may not realize constrain them, because at the very least the designer has got to pay attention to them. And if you do a very simple value-added calculation … Suppose you have a million pounds, or euros, or dollars to spend on a project, and suppose you have ten requirements: that’s 100,000 pounds or dollars per requirement, and spending a little bit of effort managing the requirements to make sure you’ve got the right ten doesn’t sound too bad. Suppose you then decompose them to get 10,000 requirements: that gives you 100 pounds per requirement, and you can barely manage a requirement for 100 pounds by the time you count the hours spent writing it and linking it to all the other ones. 

So it is absolutely true that you can only afford to manage a certain number of requirements per unit value of your project. The entire philosophy that the right thing to do is to keep breaking requirements down until you get to some notional bottom is wrong, because at some point you’d be into atoms and molecules and quarks, so where should you stop? The place you stop is as soon as you can hand off the problem to the next level down. As soon as you’ve tamed the problem well enough for an expert in the field to go in and solve it, you don’t need to give them any more requirements. 
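Hillary's value-per-requirement calculation can be reproduced in a few lines. The budget and requirement counts are the illustrative figures from the conversation, not data from a real project:

```python
# Illustrative arithmetic: a fixed project budget spread over more
# requirements leaves less value, and so less affordable management
# effort, per requirement.

def value_per_requirement(budget: float, n_requirements: int) -> float:
    """Average project value carried by each requirement."""
    return budget / n_requirements

budget = 1_000_000  # pounds, euros or dollars, as in the interview

print(value_per_requirement(budget, 10))      # 100000.0 per requirement
print(value_per_requirement(budget, 10_000))  # 100.0 per requirement
```

At 100 pounds per requirement there is clearly no budget left for writing, reviewing, and linking each one, which is the point of the argument.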

Hillary: Now a lot of consultancy companies make lots of money out of taking a client’s problem and turning it into thousands of requirements, which the supplier then has to read, and if the requirements are not well formulated they won’t get a clear understanding of the overall task, so they’re just delivering thousands of requirements, not delivering one integrated solution. So if you want an integrated solution you need to specify the problem in a way that competing suppliers can come up with good, well-integrated solutions. 

Joshua: Yes. It’s interesting, cause I remember I went to a conference once, and it was not a systems engineering conference, it was a naval architecture conference, and this older professor told me he doesn’t like systems engineers, because he was once given 10,000 requirements for a naval warship, and as a naval architect who views designing a ship as a beautiful activity, he said it would take him several years to read and understand all of those. 

Hillary: Yup, that’s right. What you can think about in the case of a ship, or of any product, is that once you’ve decided what kind of product it is, there are certain things you need to parametrize about the product. But that’s actually in the domain of the product expert, not the domain of the customer or the systems expert, and I think I know which vessel that might have been. The aircraft carrier, I think, had several hundred requirements, 700 or 800, and that was too many because a lot of them were meaningless. The Astute subs had well over 10,000, and a huge number of them were meaningless. 

Joshua: I think it was a submarine. 

Hillary: Quite. I can assure you he was right in terms of the value of most of those requirements. The entire requirements thing is a problem because it actually comes out of software: if you’re writing software from scratch, effectively you decompose the requirements until you get what you call pseudo code, then you write real code based on the pseudo code, and that’s your software system. But that rests on the assumption that you have a clean sheet and can design anything that you can specify. In the informational world, that’s actually true. In the physical world you design things for which known design patterns exist, or for which you can discover design patterns, so I think systems architecting needs to look a lot more at codifying design patterns from different domains, and then applying them once your solution development gets to that point. 

For the aircraft carrier problem, once we’ve decided we actually want an aircraft carrier and not anything else, then effectively you have a pattern for an aircraft carrier, or for a submarine, and a set of things you have to specify in order to let the designers design it. Actually that’s how product line thinking works. You need this transformation step between customer needs and technical solution, and that transformation step needs to happen at the highest level possible. The cheapest thing for the world is, once you’ve decided what to design, to flip everything into design space, making sure you’ve kept the traceability between your decisions in design space and your decisions in problem space. 

And it is easier to keep that traceability the fewer cross linkages there are in the mapping. So having got to the point where you know what the design is, you then specify it in terms of the design solution, not in terms of the customer need. You manage the transformation between customer need and design solution at the right level, whatever that is, and you switch into what’s an efficient design paradigm for that kind of product, rather than a true reflection of all the customer needs statements, because until you understand the cost of each need statement you can’t regard requirements as gospel. It is very easy to write requirements that are impossible to satisfy. 

Joshua: Yes. Particularly when they start conflicting with each other. That’s fascinating. 

Hillary: Yeah. I want a spacecraft that will take me to Mars in ten days and it must cost no more than a million pounds. I can write that down.

Joshua: Yes. That’s very hard to satisfy.
Are there any sort of resources or books that you think people often overlook and they should read or some very key ones that people should re-read because they’re sort of classics?

Hillary: Well, the key classic in the systems architecting field is Rechtin’s original book on systems architecting.
My book tries to give, if you like, a completely different take, complementary not competitive, that says you need to focus on certain things about systems, and particularly how this logical/physical split works, because that turns out to be a big blocker for a lot of people making good use of modern systems engineering tools. So effectively I wrote my book taking everything Rechtin wrote as a given: how do we map that into the modern systems engineering tools we’re trying to use and modern thinking about systems?
And another very interesting book that very few people have heard of, which I’m reading at the moment, is Robert Rosen’s Anticipatory Systems. It’s very expensive, but I think there may be PDFs around that are not as expensive, I’m not sure.

But it’s about the notion that some kinds of system are reactive: given an input, they react. Others are anticipatory. In other words, if humans see a bear in the woods they think, I’m gonna keep out of the way of that bear. It’s not because the bear has done something bad to them, it’s because they think something bad might happen if they don’t do something. If you like, the pattern for a lot of 20th century engineered systems was that they were pretty much reactive to user inputs. 21st century systems increasingly have to be anticipatory. They have to say: well, that situation I’m seeing now doesn’t look like a good one, I’m gonna do something about it. So that’s a very fundamental book about the nature of systems, the nature of what he calls the modeling relation between a model of a system and a real system, and some of the maths involved in that. Not all of which … 

You don’t have to understand all the math to get value out of the book. On a completely different level, another book not many people have heard of is The Incremental Commitment Spiral Model by Barry Boehm and some of his co-workers, and that’s about, if you like, a learning-journey approach to a staged project life cycle model. This is actually very much the approach we used at the end of the successful missile warning project I mentioned earlier. 
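Rosen's reactive/anticipatory distinction can be illustrated with a toy controller. This sketch is my own, not anything from Rosen's book (the thresholds and the linear forecast are invented for illustration): the reactive version responds only to what has already happened, while the anticipatory version consults a simple internal model of the world and acts before the limit is reached.

```python
# Toy contrast between a reactive and an anticipatory controller.

def reactive(temperature: float) -> str:
    # Reacts only once the limit has actually been crossed.
    return "cool down" if temperature > 100 else "do nothing"

def anticipatory(temperature: float, rate: float, horizon: float) -> str:
    # Internal model of the world: assume the current trend continues.
    predicted = temperature + rate * horizon
    return "cool down" if predicted > 100 else "do nothing"

# At 90 degrees, rising 5 degrees per minute, with a 5 minute horizon:
print(reactive(90))            # do nothing (nothing bad has happened yet)
print(anticipatory(90, 5, 5))  # cool down (the model predicts 115)
```

The interesting part is the internal model: the anticipatory controller is only as good as its forecast, which is exactly the modeling-relation point Hillary raises.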

Joshua: Yeah, cause I find in naval architecture the spiral model is very popular. When I showed the V model to naval architects, they hadn’t seen it before. They showed me a spiral and said this is how we do it. 

Hillary: Well, naval architecture, and to be honest most design, is very much about satisfying a problem with constraints, and a lot of the constraints don’t become apparent until you’ve engaged with the problem. And the spiral model is just a set of V’s, so there’s actually no difference between the spiral model and a series of V models one right after the other. Having said that, with a ship you’ve got volume constraints, you’ve got weight constraints, it’s got to float, it’s got to be stable, and you’re juggling a lot of different functional entities within a very tightly constrained physical space envelope. That means you start off with some broad principles, but then you have to work out the general shape of the problem, and you’ve probably got two or three different fundamental forms you could use. So the synthesis actually involves trying lots of different solutions, starting with the big difficult constraints. 

How do I satisfy the big difficult constraints? Then, when I’ve got solutions for them, how do the smaller, less important things fit together? And actually that’s an important general principle: in a tightly constrained design space you have to satisfy the hardest problems first, because they use up most of your free design variables. If you take the easiest problems first, then you’ve used up most of your design variables before you’ve even worked out what the difficult problems are. So there’s something very fundamental about design, and I describe that in terms of a series of V models, cause if you look at one cycle of the spiral model it’s exactly the same as a V model. But you have to think in terms of a series of what I call in my book a decision map, and we’ve got that better sorted out now that we’ve gone through several cycles of teaching it. 

What are the critical decisions you need to make first? How does that reshape the problem for the other decisions you have to make? If you simply assume you have a single V model, start with requirements, design, end up with a solution, there’s a lot of interaction between requirements that will not be obvious until you try to map those requirements onto a real, physically constrained solution. Unless you’ve done it so often before that you’re simply changing the parameters of an existing solution, and then, as I said before, you’re actually playing with design parameters, you’re not playing with customer needs. So the mapping between … 

There isn’t a continuous flow from customer needs down to solution requirements. It’s a disjoint step that says: in this space I’m thinking about what I want; in this space I’m juggling the art of the possible. And that’s particularly true in product line engineering. If the customer cannot afford to pay an unlimited amount of money to get exactly what he or she wants, they have to ask what’s the nearest you can get to this, and product line thinking says: I have a defined solution space and a set of parameters associated with it that I have to map to the customer problem statement, and I’ve got quite a lot of freedom to parametrize my solution within limits. And if the customer’s problem space maps into that solution space, then I can give them a much better deal by using standard components configured in a way that is unique to their problem than by starting with a clean sheet. I lose flexibility, but I gain time and I gain cost, and I probably gain reliability as well if I’m using reliable, proven components. 
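The "hardest constraints first" principle from a few exchanges back can be sketched as a simple ordering rule. This is a toy model of my own (the ship constraints and their costs are invented numbers, not from the interview): each constraint consumes some of the free design variables, and processing them in descending order of cost surfaces the big conflicts at the start.

```python
# Toy model of "satisfy the hardest constraints first": each constraint
# fixes some free design variables; tackling the costly ones first
# reveals immediately whether the design space can accommodate them.

def remaining_freedom(total_vars: int, constraints: dict) -> list:
    """Process constraints hardest-first, tracking free variables left."""
    trace, free = [], total_vars
    for name, cost in sorted(constraints.items(), key=lambda c: -c[1]):
        free -= cost
        trace.append((name, free))
    return trace

ship = {"buoyancy": 5, "stability": 4, "weight": 3, "paint colour": 1}
for name, free in remaining_freedom(12, ship):
    print(f"after {name}: {free} free variables left")
```

Hardest-first, the shortfall lands on the trivial "paint colour" constraint, which is easy to relax; easiest-first, you would only discover at the very end that buoyancy itself cannot be satisfied.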

Joshua: Yes. To me this idea of being able to compose out of things that we have already designed before, and we know how they function could be very powerful.

Hillary: Yeah it is very powerful, no “could be” about it.
But it does mean the customer has to accept the constraints. And if the V model is unintentionally part of that problem, that’s interesting. 

Joshua: Okay. What do you see as the biggest opportunities and challenges right now and moving into the future for the systems community and systems engineers in general?

Hillary: Well, in my lifetime technological systems have changed from being controlled by analog circuits and discrete electronics to being controlled by software, at a level that would have been quite inconceivable to anybody who lived before about 1960. So the explosion in software is a huge opportunity, and it’s a huge problem: at the moment we can’t make good quality software, so the notion that your mobile phone gets 20 new app updates every week might be sustainable now, but it’s not gonna be sustainable in 20 years’ time, if only because of the cyber security issues. 

Okay, so first of all let’s look at software. 20 or 30 years ago the software industry went for functionality rather than quality, and there’s a debate about whether that can ever be reset, but I think if we are to get reliable, dependable, software-dependent systems, the paradigm that software is inevitably buggy has to change. You can do a sort of proof by induction that says: if I can write one line of code to be perfect, that’s fine. If I can write two lines of code to work perfectly together, that’s fine. Therefore, kind of by induction, I can add one more line of code to a perfectly working software system and it will still work perfectly, if I test the extra line properly. So that says, by induction, it is possible to make large scale “perfect” software systems if we find the biggest size of software component that we can make “perfect” and then treat the software system problem as connecting together “perfect” bits of software in a rigorously correct way. 
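The induction argument can be caricatured as grow-and-retest construction. This is a toy sketch of the idea only, not a real verification method: the system is extended one component at a time, and every addition must leave the whole passing its checks before it is accepted.

```python
# Toy sketch of the induction argument: grow a system one component at
# a time, re-checking the whole after every addition, so the system is
# never in a state that has not passed its checks.

def grow_system(components, check):
    """Add components one at a time; reject any addition that breaks the whole."""
    system = []
    for part in components:
        candidate = system + [part]
        if not check(candidate):   # the whole must still pass, not just the part
            raise ValueError(f"adding {part!r} broke the system")
        system = candidate
    return system

# Example check: every component must expose a 'run' entry.
parts = [{"run": 1}, {"run": 2}, {"run": 3}]
system = grow_system(parts, lambda s: all("run" in c for c in s))
print(len(system))  # 3
```

The expensive part, as Hillary says next, is that `check` has to exercise the whole system after every step, which is exactly what quick-and-dirty development skips.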

But that costs an awful lot more, and it doesn’t compete with getting stuff out quickly, so until something changes … It may well be product liability to do with autonomous vehicles that forces the change. It might be that somebody cracks the problem and gets a huge competitive advantage from producing perfect software when nobody else can. But so far the people that have done that have not been able to move that competitive advantage outside their immediate business space. So that’s software.

The second challenge is everything getting more interconnected. People need to understand that very complex systems interacting with each other, in ways that were not anticipated by the designers, will not necessarily always perform in intuitive and acceptable ways. We can’t stop market crashes, and we will occasionally have electricity drop-outs. It’s surprising how few big electricity system drop-outs we get, when you think about it.

So there’s a set of challenges to do with larger and more interconnected systems that have cyber vulnerability, that have to be designed to minimize the risk of cascading failure due to a failure in one zone. There’s the renewable energy problem that says if we’re using wind turbines and the energy is intermittent, we either need to do demand modification, using smart meters and incentives such as differential pricing, or we need to have grid scale storage, or both. And we need to understand the trade-offs between more energy production and more efficient consumption, better insulated homes, lots of obviously systemy things that the existing infrastructure networks, if you like, don’t have the internal models embedded within them to take into account, cause that’s the other component. 

An anticipatory system contains a model of how it thinks the world works that allows it to make these anticipatory decisions, and the implicit or explicit models inside the current electricity grid systems are not sophisticated enough to describe all the different ways the world is now starting to work. So there’s a huge system challenge about getting all that together. What else? There’s sustainability and the environment, and the idea that the system challenge is to make things work better with less damage to the environment and fewer unintended consequences, so that the world can sustain a population of nine billion or whatever it’s gonna be, with the vast majority of people having reasonably comfortable and fulfilling lives. Which is a huge challenge, but we have no reason to believe it’s impossible. It will be impossible, though, if we reject everything we discovered in the Enlightenment. 

Joshua: Yes. 

Hillary: Which is maybe the final rant. 

Joshua: Okay, so yeah. We’re coming up to about time, so my final question: is there anything in particular that you’d like to say or explain, or ask of the community, or should we close the interview now?

Hillary: Yeah I think defect free software would be a big thing.

Understanding how to get an evidence-based decision culture in organizations that don’t think you need evidence-based decisions is perhaps the biggest challenge. And the easy thing for the individual systems engineer is to move into a benign environment where they feel they can be useful and listened to. For some of us the challenge maybe is to tackle some of the less benign environments and try to make them benign, but that’s kind of a problem for the next generation, I think. 

Joshua: If people would like to learn more about your work, I guess they go … Do you have a website?

Hillary: Yeah, I’ve got a website, www.sillittoenterprises.com. If they look at the College Publications website, they can see a description of my book, and they can now get a PDF copy at a very reasonable price. And if you’re an INCOSE member and you google my name on, I guess it’s the Wiley electronic publications website, most of my published work is there: papers at INCOSE conferences and now the Systems Engineering journal. So you can find out other stuff there, and particularly the batch of papers we’re putting out from this project and the 2018 INCOSE conference. That’s gonna be quite interesting. 

Joshua: Yes and if you’re an INCOSE member you can read all these papers for free. 

Hillary: Yup that’s right. 

Joshua: Well, it was really good to speak to you today and to get all this advice. Thank you very much for coming on. 

Hillary: Well it’s been a pleasure. I hope it’s useful and I hope it helps somebody.

Joshua: Great well it’s helped me personally so thank you very much.