One of my favorite DAWs is the DAW formerly known as PreSonus Studio One (currently rebranded as Fender Studio Pro). I got my first professional licence back in version 4 (the current version is 8) and it ticked the right boxes for me. These days most musicians have no need to follow an “industry-standard” purchasing pattern (which usually meant either Pro Tools or Cubase), and given that I did not have access to Apple hardware as a given (how times have changed!), Studio One presented me with an arguably super-fast and efficient workflow, at the very least on par with that of Logic Pro (a DAW running only on Apple hardware). Since my deciding factors are workflow speed and orthogonality, it was a clear winner, and both properties increased the workflow speed even further. Being orthogonal means that when you want to utilize new or previously unused functionality, you can correctly estimate how to use it based on what you already know.

Given that I have a full-time job, extra-curricular professional activities, a couple of music projects with professional musicians (one of them holding a record deal), production work for some quite prolific artists, and a family (dog included), it can easily be inferred that my premium resource is time – anything that saves me time wins over any antiquated industry standard. I was not alone in this: by the time version 7 rolled out, even in popular Greek musician/producer forums (which tend to be reactionary and stuck in a “good-old-days” mentality), quite a few folks were recommending Studio One as the DAW of choice. However, while PreSonus had promised 4 updates per year, when they skipped the 3rd one, my spidey sense started tingling. When they missed the 4th, I started building up Logic Pro skills.
To cut a long story short, on Tuesday the 13th, 2026, they released version 8, rebranded as Fender Studio Pro, asking folks to pay money for the honor of upgrading, introducing some more “bedroom producer friendly” workflows, and jeopardizing the stability of the DAW (imagine doing a 15-minute audio take just for the DAW to crash at the 14:48 mark). Even with a competently executed marketing campaign, there was a huge backlash inside the user community (this introductory paragraph was going to be way more scathing, but Fender did listen up and quickly released an out-of-band patch, which compelled me to bottle the acid a bit). So effectively, for “reasons unknown”, Fender introduced a positioning problem – a problem that, in my opinion, could have been avoided, but I guess the latest and greatest C-Suite changes introduced said “reasons unknown”. So, while the DAW software is actually quite remarkable (and the lightning-fast workflow never went away), positioning and perception have changed, and that got me thinking a bit.
Let’s start with the definitions of the terms that I will be using for the remainder of the article.
- Positioning is how an organization defines what a product is, segment-wise.
- Perception is how folks actually perceive the product.
- Typical product examples can be “Whatever-as-a-Service”, B2C or B2B, perhaps even sprinkled with a bit of “AI” these days.
- Product Security is the concerted effort to ensure that users of a product (as per the definition above) do not have to deal with any security artifacts compromising their security posture, and that their security requirements are met.
In most organizations, Product Security has both a positioning and a perception problem. For the remainder of the article, I will examine common structures that I have observed in the field. I will not cover every structural combination that can be encountered – the world is a strange place, and that would blow the scope of this article up to Encyclopedia Britannica size.
The very first, and surprisingly common, pattern is the “Product Security? Where we’re going, we do not need Product Security” one. Ignoring security in this day and age is not just dumb, it is plain irresponsible. Not only will you get attacked from the get-go, but the attacker profile has changed: gone are the days of “hello, we defaced your website, admin, check index.html.bak”; these days it is more like “we pwned you silently, attacked all your users via technical and non-technical avenues, and when we were done, it was ransomware time”. Even if you have a bootstrapped team of folks carrying significant security knowledge, chances are they will be wearing 1001 (decimal, not binary) hats at any given time, and the best possible outcome is that they keep track of security topics and do not introduce any glaring holes on purpose. Chances are your organization is the former (no security folks at all), not the latter, so please be responsible. The good news is that both the private and the public sectors do require a form of demonstrable baseline these days, so it is not the Wild West it used to be (PCI-DSS is a prime example of an industry regulating itself before governments do; recent EU directives are the opposite – governments stepping in and trying to enforce a minimum baseline).
Another common pattern is security under IT (IT here being the department responsible for things like organizational access controls, email, the printers, etc.). This pattern is common in organizations that have no engineering and, by extension, no product – but do such organizations even exist today? Product and users do not have to be external. In addition, IT, per the definition given above, can be viewed as a silo – and a cost-center silo at that. Combine this with the relatively rare expertise combination required, and a pattern of sub-optimal security impact starts to form.
“Oh, I know! Secure software is quality software; there is a ton of relevant technical literature and this is a battle-tested, tried-and-true model that has been used in the past” I can hear from the back. Well, while I do have a soft spot for a lot of retro things, I am also happy that a lot of retro things have now been forgotten – Darwinian evolution, perhaps? In this day and age, adopting a de facto reactive stance is a recipe for disaster along multiple dimensions. Not only does addressing security topics late in the SDLC increase cost and stress significantly, it also negates the effects of cultivating a security culture, creating an even more reactionary stance that penetrates the whole organization (the infamous “return 4” piece of code from Microsoft Word’s death march comes to mind – facing extreme deadlines, a software engineer coded a “return 4” statement in the method that calculated the length of a line, assuming QA would catch it later). So, maybe in the distant past that was a tolerated bad practice – nobody knew any better – but today? No, just no (click the link for the one-liner rebuttal if someone suggests that).
“Well, I get what you are saying. Product Security is a subset of the security engineering discipline after all, so what better place to put it than in Engineering?” While an obvious improvement over the previous monstrosities, this approach too leaves something to be desired, even if saying so raises the odd eyebrow. There are two ways of embedding product security into engineering: either sticking it under a specific team, or product security being its own team. Making product security part of a larger team, usually one dealing with other NFRs, can introduce communication problems with engineering teams dealing purely with FRs, but this is not the whole picture. Even with standalone product security teams, the silo here is Engineering, and siloing Product Security within engineering one way or another can lead to a few predictable anti-patterns.

From an external viewpoint, the days when folks rpc.cmsd’ed their way into Solaris boxes are dead – countermeasures have progressed, and while critical RCEs have surfaced, do surface, and will continue to surface, they are not the only way someone can compromise your security posture. Attackers (who have grown more malicious) never, ever play by the “attacker’s handbook”: between hitting you with a 0-day (or even a 1-day) or faking their way into your organization via, say, Support (the Atlassian and Okta breaches come to mind – look up the postmortems, they are fun reading), the path of least effort and resource cost will be picked almost every single time (unless they are into Skyline Queries, but that is a different, more APT-like pattern). So, you can have tough-as-nails, demonstrable engineering security and still get your butt handed to you by attackers. But the anti-patterns do not stop there. Siloing security strictly within engineering not only limits visibility into the overall security posture but introduces both a positioning and a perception problem.
If Product Security is positioned and perceived as a purely engineering function, not only can relevant knowledge that could be utilized in non-engineering verticals be lost or downgraded, but business decisions can and will (perhaps deliberately) skip security input, leading to a lot of pain and friction later. So, definitely an improvement, but still no cigar.
“Well, let’s align product security closely with GRC, maybe even place Product Security under GRC” I hear once more from the back of the room. I also see the engineering brigade rolling their eyes in an “anything but THAT!” fashion. This setup usually gets a bad rap but, under the right circumstances, it is an improvement over all of the above. The key skills here lie more in the domain of so-called “soft skills” than hard-core engineering, and GRC and Product Security require two distinct, complementary skillsets. Something I learned really, really early in my career is that you do not introduce security controls to “be kind to your neighbor” (I am actually quoting my skip at the time) – there must be a proper benefit to doing so. After all, engineering is based on prioritizing the right things and addressing them with the right trade-offs; effectively, a cost/risk/benefit analysis. Combining folks with different skillsets and value systems can be tricky, though: all sides must be able to express what is needed, why, and how it can be achieved in terms the other party can understand. In my career, I have seldom encountered such a scenario (perhaps I am unlucky? Like, subscribe, and comment below – oh wait, wrong medium!) – I have encountered it, but more often I have encountered folks unwilling to engage in meaningful dialogue. When a meaningful dialogue is not taking place, it is a coin toss which stance the other party takes: default-allow (“whatever, do whatever you want”) or default-deny (“whatever, we are not doing that as it does not affect our compliance posture”). The default-deny stance is way worse than default-allow – at least with default-allow you are improving the security posture, perhaps lacking the clear benefits of the cost/risk/benefit analysis above.
In the default-deny scenario, you negated an improvement using one of the lamest reasons in existence, hurting the organization – so please don’t do it, mkay (and that is assuming ignorance rather than a LIDL version of Machiavelli)? On the other side of the fence, if you are unable to communicate a complex technical topic – one with seemingly equivalent options, one being the proper choice and the other a catastrophe – to an audience with a different skill set, then either you have a severe case of the SNAFU Principle (so the whole endeavor is doomed from the get-go) or you need to work on your understanding of the topic.
A solution that attempts to mitigate the anti-patterns above is having a separate S&P (Security and Privacy) entity within the organization. This mitigates the risks outlined above, but that does not mean it is an easy solution. On the plus side, being independent means the whole organization can benefit, and you can achieve some pretty good alignment between seemingly not-closely-related verticals. Upon reaching a certain level of maturity, proactiveness is infused across the whole organization, achieving a synergistic effect. Furthermore, combining multiple disciplines helps leave no proverbial stone unturned, and once everything clicks into gear, the high upfront cost is not only covered, it is eclipsed by the benefits. However, in practice one should be aware that this is a Herculean effort to get going – even choosing the path of virtue is not an easy choice. Additionally, for this to work it requires significant investment and buy-in. As Al Capone eloquently put it, “you get much further with a good word and a gun than with a good word alone”. As I have stated in multiple forums before, influence is a great thing, but when push comes to shove, you have to be able to shove – and that is not feasible without C-Suite buy-in for when things turn ugly. Staffing such a team with competent professionals across the board is not trivial either: be prepared to look for them, keep looking, and then look some more. When they are found, be prepared both to pay a premium and to offer them a compelling story. “But is this ideal scenario even feasible?” one can ask. Well, as much as I hate cargo-cult practices, it works for Big Tech, so if you are going to cargo-cult one thing, make it this one, please. Pretty please with sugar on top. Your users will be thankful. Your bottom line will be even more so.