If anything, it's the incentive system in the software industry that's at fault.
1. No designer is given a promotion for sticking to conventional designs. It's their creative & clever designs that get them attention and career incentives.
2. No engineer is paid extra for keeping the codebase from growing too much. It's the rewrites and the effort he puts into churning out more solutions (than there are problems) that offer him a chance to climb the ladder.
3. No product manager can put "Made the product more stable and usable" on their resume. It's all the extra new features they thought up that will earn them a reputation.
4. No manager is rewarded for how lean a team they manage or for getting things done with a tiny, flat team. Managers pride themselves on how many people work under them and how high up the hierarchy they sit.
Our industry thrives on producing more solutions than needed. Effort is rewarded by conventional measurements, without thinking through what direction that effort was pointed in.
Unless the incentives of everyone involved are aligned with what's actually needed, we'll continue solving imaginary problems, I guess.
> “No designer is given a promotion for sticking to conventional designs. It's their creative & clever designs that get them attention and career incentives.”
This is a massive change from my first software industry job in 1997.
I was essentially a “design intern who knows HTML” on a team that built a shrinkwrap Windows application for enterprises. The core of the design team was a graphic designer, a cognitive scientist, an industrial designer, and a product manager with industry experience in the customer domain.
The application was Windows native. User research was conducted on site with scientific rigor. Adhering to platform guidelines and conventions was a priority. I think I spent a few weeks redrawing hundreds of toolbar icons so they’d be in line with the Office 97 look (the kind of boring job you give to the junior). If the app stood out on the Windows desktop, that would have been considered problematic.
Today a similar design team would have only a graphic designer and a PM, and neither of them would care in the slightest about platform guidelines or the customer domain. The UI is primarily an extension of the corporate brand. Hiring a cognitive scientist? Forget about it…
Everything certainly wasn’t perfect in the Windows/Mac desktop golden era. But the rise of the web wiped out a lot of good industry practices too.
From a person who started using computers in the early 2000s:
THANK YOU!
None of the current SaaS apps I use comes close to the experience of using software from that era.
Take the simple list view of a typical Windows/Mac application:
1. Command-clicking selected multiple objects.
2. Shift-clicking selected a range.
3. Right-clicking brought up actions for the selection.
4. Double-clicking opened an object.
This pattern was followed in almost all list views, so there was no re-learning and no surprises.
Now can you say the same about the list views of modern web apps?
Can you apply the same list selection experience across Google Drive and Microsoft OneDrive and Apple iCloud? Nope.
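The conventions above aren't even hard to implement. A minimal sketch of that classic selection logic, in TypeScript with illustrative names (nothing here comes from any particular toolkit):

```typescript
// Classic list-view selection: plain click replaces the selection,
// Ctrl/Cmd-click toggles one row, Shift-click selects a range from
// the last plain-clicked row (the "anchor").
type SelectionState = {
  selected: Set<number>; // indices of selected rows
  anchor: number | null; // row used as the start of a Shift-click range
};

function handleClick(
  state: SelectionState,
  index: number,
  modifiers: { ctrlOrCmd: boolean; shift: boolean }
): SelectionState {
  if (modifiers.shift && state.anchor !== null) {
    // Shift-click: select the contiguous range between anchor and here.
    const lo = Math.min(state.anchor, index);
    const hi = Math.max(state.anchor, index);
    const selected = new Set<number>();
    for (let i = lo; i <= hi; i++) selected.add(i);
    return { selected, anchor: state.anchor };
  }
  if (modifiers.ctrlOrCmd) {
    // Ctrl/Cmd-click: toggle this row in or out of the selection.
    const selected = new Set(state.selected);
    if (selected.has(index)) selected.delete(index);
    else selected.add(index);
    return { selected, anchor: index };
  }
  // Plain click: replace the selection and set a new anchor.
  return { selected: new Set([index]), anchor: index };
}
```

That's the whole contract users relied on for decades; every web app that reinvents it differently throws that muscle memory away.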
That's where we failed as an industry. We let a lot of designers run too wild with their ideas, to put it bluntly.
Also:
CTRL+A selects all
CTRL+Shift+End selects all from where you are to the end
CTRL+Shift+Home selects all from where you are to the top
One of the (many) problems of web UIs is they often ignore the keyboard completely.
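Those keyboard behaviors are a few lines of code too. A sketch of the handler for a focused list, again in TypeScript with made-up names (`focusIndex`, `count` are assumptions, not any framework's API):

```typescript
// Conventional keyboard selection for a list of `count` rows, where
// `focusIndex` is the currently focused row.
function handleKeydown(
  selected: Set<number>,
  focusIndex: number,
  count: number,
  key: string,
  modifiers: { ctrl: boolean; shift: boolean }
): Set<number> {
  const next = new Set(selected);
  if (modifiers.ctrl && !modifiers.shift && key === "a") {
    // Ctrl+A: select every row.
    for (let i = 0; i < count; i++) next.add(i);
  } else if (modifiers.ctrl && modifiers.shift && key === "End") {
    // Ctrl+Shift+End: extend selection from the focused row to the end.
    for (let i = focusIndex; i < count; i++) next.add(i);
  } else if (modifiers.ctrl && modifiers.shift && key === "Home") {
    // Ctrl+Shift+Home: extend selection from the focused row to the top.
    for (let i = 0; i <= focusIndex; i++) next.add(i);
  }
  return next;
}
```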
Emacs would like to have a word.