Rogue + Wolf’s Journey: Fashion, Technology, and Ethics in the Age of AI
You might have noticed the recent explosion of Artificial Intelligence (AI) software and AI-enabled tools. The latest AI technologies are seeing massive and rapid adoption across all industries.
You might also have noticed the long threads of comments on our social media that mostly revolve around the demand to “stop using AI”. We have noticed for sure. 😅
We generally prefer to stay in our cave and design but we believe this situation demands a public conversation. Grab some cold brew coffee or a fine chamomile-lavender infusion and sit back for a relaxed read.
Our Passion
We, Eloise and Michael, started Rogue + Wolf in 2012 from our living room with a crazy vision: let’s use 3D printing to manufacture products people enjoy, sidestepping traditional manufacturing. 3D printing had just become available to consumers at scale through companies like Shapeways and we thought: “surely we can use it for full manufacturing of retail-ready products... right?”. Wild idea at the time but we took a leap of faith! Long story short, we decided that jewellery was the only (barely) viable product we could personally enjoy making. Fast forward a few years and we were in disbelief as thousands of people were enjoying our black nylon, 3D printed jewellery that seemed like wild fantasy at conception.
That was the start of Rogue + Wolf, but our story really begins even further back. The two of us have been a couple IRL for 20+ years. Around 2005 we came across the work of futurist and inventor Ray Kurzweil, well summarised in his book The Singularity Is Near. We had been techno-optimists and forward thinkers since we were young, before we even met, but reading about the possible future and how soon it could arrive blew our minds. We believed Ray’s analysis, agreed with it, and kept our eyes open for early corroborating evidence.
3D printing was our first intimate contact with future technologies, our promised sci-fi future. And we dove right in, knowing that we were possibly too early and that 3D printing was not quite ready for mass adoption yet. We think that gamble paid off, and even today, 12 years later, it’s still somewhat early for mass adoption of 3D printing. But early adoption is at the core of Rogue + Wolf; it’s our lifeblood.
We firmly believed that AI was coming in the 2020s, even though it felt like a crazy idea in the early 2000s. Around 80% of scientists and researchers in the field at the time believed that human-level AI would take 100 years to develop. And here we are! Twenty years of uncertainty and anticipation later, human-level computer cognition is finally here. We dove right in, practically vibrating with excitement.
The Potential of AI
Artificial Intelligence means achieving thinking and learning on a non-biological substrate, e.g. a computer. This ability has finally been achieved: the latest AI techniques can match or outperform human-level skill on a massive range of specific tasks, especially around the written word and visual composition. AI is also far, far beyond human skill level in complex scientific tasks like drug discovery and protein folding.
Humanity now has human-level competence on tap. It’s hard to compare this development with any past technological advancement; perhaps the development of language was the last event comparable in magnitude and importance. Intelligence is the most powerful force in the universe, and humanity has just taken a big step towards mastering it.
We are on the cusp of unlocking abundance in most areas of life, from food, to education, to health, to art.
The Dangers of AI
We can see the frustration, worry, and even fear about AI technologies out there. In all honesty, we expected some pushback but were still surprised by its speed and intensity, since we’ve always seen this new technology as positive and have been anticipating it for decades. Of course, there are always risks bundled with progress. If humanity had not pushed through those risks, though, we would still mostly be subsistence farmers like our great-grandfathers.
Humanity will, of course, have to work towards a compassionate application of new technologies, as always. But we do not consider non-adoption a real option. Any country that stifles progress in the field of AI will fade to insignificance very fast, and it’s pretty clear that state leaders already recognise this.
It’s not that worries about AI are not real (there are even existential risks involved); it’s that the only viable solution is to work through them. “Stop using AI” is one of the worst approaches to the problem, because then we allow someone else to decide how AI will be used. One of the worst outcomes for a population is to allow a state actor with antithetical worldviews to become 100 times more advanced and our de facto hegemon. We should deploy AI faster, and more compassionately, than those who would not.
A low-risk industry with limited societal repercussions, like fashion, is the least of humanity’s worries. A tiny independent design team of 5 people using AI technologies to the best of their abilities is an insignificant concern; we would argue that this should actually be promoted. We worry about our own relevance in the broader economy too, and we know that every major company in the world is deploying AI as fast as it can manage. We believe we can do it in our own unique way, though, and there can be value for society in this.
Our Thoughts
We can share our point of view about AI as designers and fellow humans, and we will do just that in future posts. But we are talking about monumental, society-shifting, once-in-history events here. The ethical implications are important, and we believe you would be better served by reading the best philosophers, technologists, and sociologists humanity has to offer, rather than the views of a couple of designers.
We’ll share our point of view and experience in any case, for whoever might find them useful. Many people insist that we explain ourselves anyway, so we might as well. We are in favour of embedding AI in every appropriate human activity, in the most compassionate way possible. We don’t believe non-adoption is an option.
We can attempt to explain our stance over a few future posts. Writing thoughtful posts will likely work better than answering individual comments or emails. And as much as we’d love to chat with everyone, all 5 of us would do nothing but answer comments all day if we tried.
On that note, long arguments in the Facebook and Instagram comments improve the reach of our accounts, because social media algorithms like and promote drama. However, we would prefer that everyone has thoughtful, deep discussions about AI and how humanity adopts it, rather than heated arguments on social media, so we will avoid playing to the algorithms.
Coming Up
We’ll aim to write roughly one post per week, each covering one of the many topics touched on above. From design to ethics, from job losses to geopolitics, from how to use AI responsibly to how to avoid bringing about the End Times, there’s a lot worth talking about. We will try to prioritise the subjects people seem most concerned about.
The Bottom Line
We believe that we can all work towards a bright future of abundance and prosperity. Technology has always been humanity’s main effort towards this goal, our way of overcoming ourselves.
We’re committed to deploying new technologies compassionately and making ethical decisions in life and business, which is never easy or simple. Yes, this is the path we’ve always walked, regardless of how rough or winding it is.
Please go out there and read the writings of people much smarter than us. But if you would like to hear the non-expert but specialised opinions of a small indie design team, keep an eye on this blog or follow us on social media, and come back for the next post.
With love and optimism,
Michael and Eloise
Founders and Directors of Rogue + Wolf
Edit 01/03/2024: we wrote the next blog post, about job losses: AI and Job Transformation.
Edit 08/03/2024: our next blog post is about AI photography and the future of photography.
Edit 15/03/2024: next we talk about fashion photography standards and how AI photoshoots are not that different.
Edit 23/03/2024: we wrote about some more reasons why we use AI.
Edit 29/03/2024: we wrote some examples from other industries to bring our AI adoption arguments into focus.
I deeply regret buying your products after reading all this
I came across your brand on Instagram and was excited to buy something, but as soon as I saw your stance on AI, I immediately lost all interest. Any “artist” or “creator” who endorses the use of AI art endorses theft from the actual artists and creators who made all the actual art used to feed these engines. Think about the people you hurt by doing this and the customers you alienate by thinking this is an adequate way to advertise your product, as well as to make the product itself. I appreciate you putting a comment section here so that people can communicate the issues we have with your methods and ideology. It already makes you look horrible now; imagine how bad it will be when AI “art” dies off and isn’t the “it” thing anymore. Then what will you have? There are plenty of ways to create without stealing from artists; perhaps you could do that instead. The fact that you can’t even seem to write your own blog posts and have to use ChatGPT for them really shows how little you care about your entire brand. It’s embarrassing. All people see when you advertise your use of AI is “oh okay, another brand that has no original ideas and skimps on the hard work of creating an actual product or piece of art, too broke to hire actual artists or too lacking in creativity to make their own. A company that is willing to lie to its customers about its products, cut corners, and that doesn’t care about real artists”. AI art is not a tool, it’s offensive.
I wanted to take you up on your call to discuss AI, as a software developer and someone who has used the current “AI” models a lot to try to understand them, not as someone afraid of them.
Firstly, let’s put it out there that “AI” in this context is a marketing buzzword. These systems are not intelligent; they are machine learning algorithms and neural networks built to do what a computer is specifically good at: process large volumes of data. They are not AI in the sense of the word everyone has been familiar with for however many years AI has been a concept, because they cannot have thoughts of their own. They can only take the data they have and remix it; look at any current AI with a small dataset behind it. For the rest of this comment I’ll be using the word “AI” for ease of understanding, but make no mistake, these are not AI.
Following from this, “human level computer cognition” is not here at all. If you trained a human to code in Python and then gave them Javascript, they would pick it up very quickly because they have prior experience in general coding principles. If you train a human in, say, tennis, they would really quickly pick up table tennis, badminton, or any other racquet sport because of their transferable skills. AI does not have these transferable skills. An AI that has mastered generating oil paintings has to start from 0 again if it wants to generate realistic landscapes. A human would take a bit of time to learn this but has a massive head start compared to someone who has never painted, whereas an AI can only start from the beginning, as if it were just born, because it cannot transfer skills.
Similarly, I dispute your claim that we have achieved thinking on a computer; we most certainly have not. Babies learn constantly and pick up things they’re never taught, because humans are naturally curious and actually think about things independently. That’s how children can learn so easily without people specifically teaching them: they observe their environment and pick things up really well. AI cannot do this; AI needs to be hand-fed information on a specific topic. If you try to make the information too broad, it just fails and becomes awful and nonsensical at everything it attempts. So you have to home in on something really specific and train it only on good examples of that, as an AI also has no instinctual way of discarding duff data. That’s why you see LLMs hallucinating even in the most advanced models, or spreading misinformation. It’s also why there’s so much work in curating the dataset. You see many new AIs formed during the boom where people have fed in other AIs’ work, and then you get these new ones generating completely incomprehensible garble, text and images that give the illusion of understanding but don’t actually form anything coherent.
I have used the current “top” AI software to try to generate code, and it turns out it’s actually not very good at creating code either. It can only remix what it has seen, so it can never use code more recent than its dataset. It’s also frequently wrong: often the code you generate just doesn’t work, or if it does work, it doesn’t do what you want it to do. LLMs just guess the next word and have no oversight of the full meaning of what they’re writing, so they will always be limited and unable to truly create new ideas without much more research.
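To make the “guess the next word” point concrete, here is a rough toy sketch in Python (just a bigram frequency table I put together for illustration, nothing like a real transformer-based LLM) of that generation loop:

    from collections import Counter, defaultdict

    # Toy "training data": the only text this model will ever know about.
    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    # Count which word tends to follow which one (a bigram frequency table).
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    def generate(prompt: str, length: int = 6) -> str:
        # Repeatedly "guess the next word" by picking the most frequent continuation.
        words = prompt.split()
        for _ in range(length):
            candidates = following.get(words[-1])
            if not candidates:  # nothing in the training data to remix, so stop
                break
            words.append(candidates.most_common(1)[0][0])
        return " ".join(words)

    print(generate("the cat"))  # only ever re-emits patterns from the corpus above

Real models replace the frequency table with billions of learned weights over tokens, but the core loop is still “pick a likely continuation and append it”, which is why the output can only recombine patterns that were already in the training data.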
I also dislike your framing that we must push on with all technology because if we never did, we’d all still be farmers. That’s very disingenuous and ignores the many technologies that were never adopted because they were impractical or ethically dubious. Pushing forward doesn’t mean adopting every new technology; we need a reasonable filter for garbage, otherwise the NFT people would have had the true anarcho-capitalist nightmare they wanted.
To be honest, I don’t disagree that people should work on AI, but I don’t think ignoring or dismissing the criticism is the way to do it. Users are not in control of AI; it will not be shaped by what people choose to use, it’ll be shaped by what the developers want it to be. And even if you truly believe that your choosing to use AI is somehow shaping the industry, how can you square that with “non-adoption is not an option”? If users decide how AI is used, then people choosing en masse not to use AI would end its use, and that would be a valid choice.
Back before we had Deep Blue, there was no money and there were no users for AI in chess because it was so bad. The people who persisted in developing it did so purely because they wanted to, and now we have Stockfish, which is 100x better than any human could ever hope to be. It wasn’t the users who decided this; it was the developers.
And to my earlier point, Stockfish is an incredible neural network and is unbelievable at chess, but it instantly struggles with any variant that strays from the normal 32-piece set-up, because it never trained on that. A grandmaster can play a couple of games of Fog of War, get the strategies, and have a really strong baseline immediately. Stockfish would need to start from scratch, training on thousands of games of that specific variant before exhibiting any semblance of understanding, because these neural nets cannot apply general concepts to new topics.
All in all, the way I see it is that using AI models uncritically is not actually furthering your brand, nor is it furthering the cause of AI. Whether you use the models or not will not impact the developers’ choices and directions at all, and the truth is that many of these AI models take artists’ copyrighted data without permission to train. There’s a difference between an AI trained on data contributed by users, an Adobe-type AI where existing users are automatically opted in, and the many, many AIs which just scrape the internet with absolutely zero regard for licences and copyright status. If I took 100 photographs, cut bits out from each, pasted them together, and posted it for sale, people would be rightfully annoyed. There’s a reason people have commercial licences, and why you get many products which are free for personal use but require payment if used in paid-for products: artists should be paid when their work is used. I just think it’s disappointing to see the way you have reacted to the backlash, as you talk about having a discussion, but it seems like your entire post was just dedicated to defending the decision. At least some recognition of the ethical issues would be nice. You may not be the experts, but you seemed confident enough in your understanding of AI to talk about what you think all the positives are. Talking about the positives and saying “read the philosophers” for the negative ethical implications is a bit of a cop-out.
I really liked a lot of the designs, but given that I have no way of knowing what I’m actually supporting, I cannot in good conscience buy from you, which is a shame. If that’s your choice, that’s okay, you have the freedom to make it, but I just feel you don’t understand current AI as well as you think you do.
AI usage is going to cause more damage to your business than anything. Do you even know who your target audience is? I only came across your site while looking for alt clothing. Seeing the weird AI models was a huge turn-off, and I cannot believe or trust that your new/recent designs aren’t also AI. I’m going to assume I’m not the only potential customer you’ve lost. AI is not ethical, so stop calling yourselves that. I hope losing your clientele is worth the pennies you save.
I’ve seen your dresses come up in recommendations and my feed before and love the witchy/cottagecore aesthetic, but no company that uses AI and ignores its customers’ concerns can claim to be ethical in my view. I’d rather support businesses that support the creative industries and other small independent companies or artists.
I hope you read and learn and change your mind.