I'VE HAD MY HEAD IN THE CREATIVE STORAGE CLOUDS FOR 15 YEARS
- candyandgrim


If you haven't surveyed your creative storage landscape lately, the ground just moved.
Don't get 'cached' out.
Contents
- The Market Looks Simple. It Isn't.
- Virtual Drives: The Hidden Costs Nobody Mentions
  - Your office broadband becomes a shared liability
  - Your machine pays a performance tax
  - Friction doesn't just slow people down—it trains them to route around the system
- The InfoSec Argument—And Why It Doesn't Hold
  - The portfolio problem
- The Team Composition Problem
- The Platforms: What They Say vs. What They Are
  - Dropbox
  - Google Drive / Workspace
  - Box
  - OneDrive / Microsoft 365
  - LucidLink
  - Adobe Creative Cloud + Frame.io
- The DAM Problem
- The AI Question
- The Archive Problem
- The IP and Trust Question
- Finding Your Answer: Three Different Situations
  - The Solo Creative Who Collaborates
  - The Agency: 5–50 People, Hybrid Working
  - The In-House Creative Team Inside an Enterprise
- The Honest Conclusion
- Quick Reference: Platform Comparison
- Quick Reference: Which Platform for Which Team
There is a version of this article that gives you a clean table. Seven platforms, twelve columns, a winner circled in green. It would be tidy, shareable, and almost entirely useless.
Because the question "which cloud storage should my team use" sounds like a procurement decision. It isn't. It's a question about how your team is structured, how it will be structured in three years, who owns your client IP, whether your files can be reached by an AI agent, what happens to your archive, and—right now, in mid-2026—whether your enterprise has already locked you into something that's about to get significantly more expensive without your input.
This is that article. It's longer than the table version. It's worth it.
1. The Market Looks Simple. It Isn't.
The platforms most creative teams are using—or being forced to use—fall into a deceptively tidy list: Google Drive, Box, Dropbox, OneDrive, LucidLink, and to varying degrees Adobe's Creative Cloud storage layer. Most people in the industry know these names. Very few understand what fundamentally separates them.
There is one architectural distinction that matters more than price, more than storage limits, more than integrations. It determines whether your files exist on your machine or only in the cloud.
Sync-and-store platforms copy files to local storage. You work from disk. The cloud is a backup and a sharing layer. Dropbox (by default), Google Drive in Mirror mode, and OneDrive with local sync all work this way.
Virtual drive platforms show you files that live in the cloud and stream or download them on demand. Box (since approximately 2019–2020), Google Drive in Stream mode, OneDrive Files On-Demand, and LucidLink all work this way.
This is not a minor UX difference. For a graphic designer working on a 50MB brand document, it barely matters. For a video editor mid-timeline on a 400GB R3D file, it is the difference between a workflow and a disaster.
Most articles in this space don't make this distinction clearly. The platforms' own marketing actively obscures it. LucidLink's headline claim—"access files without downloading"—is technically accurate and practically misleading. Streaming is downloading. Just in smaller pieces, faster, with a local cache acting as a buffer. When that cache fills up (the default is 25GB), you are dependent on your connection speed, the platform's uptime, and your own cache management discipline to keep working. For a documentary editor with 2TB of camera originals, 25GB of cache is not a solution. It's a pressure point waiting to become an incident.
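To make the cache pressure concrete, here's a back-of-envelope calculation. The bitrate is an assumption for illustration (ProRes 422 HQ at UHD runs on the order of 700 Mbit/s; your codec will differ):

```python
# Back-of-envelope: how much 4K footage fits in a 25 GB streaming cache.
# Bitrate figure is illustrative (ProRes 422 HQ at UHD ~ 700 Mbit/s);
# substitute your actual codec's data rate.

CACHE_GB = 25
BITRATE_MBPS = 700  # megabits per second (assumption)

cache_megabits = CACHE_GB * 8 * 1000           # GB -> megabits (decimal units)
seconds_of_footage = cache_megabits / BITRATE_MBPS

print(f"{seconds_of_footage / 60:.1f} minutes of footage fill the cache")
# ~4.8 minutes
```

Under five minutes of camera-quality footage exhausts the default cache. Everything after that is streamed against your connection, live, while the timeline plays.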
Box made a similar architectural shift around 2019–2020, retiring their sync client in favour of Box Drive—a virtual drive. For teams that had built workflows around full local copies, this was a silent breaking change. Files that used to be on disk were now on-demand. Creative teams, particularly in video, discovered the hard way that "on-demand" and "immediately available" are not the same thing under production pressure.
2. Virtual Drives: The Hidden Costs Nobody Mentions
The marketing for virtual drive platforms focuses on what you gain: instant access, no local storage consumption, always-current files. What it doesn't tell you is what you pay daily, in ways that don't appear in any pricing table.
Your office broadband becomes a shared liability
In a shared office environment, continuous file streaming is sustained bandwidth consumption. One video editor streaming 4K footage from a cloud filespace can saturate a standard business broadband connection. Two editors doing it simultaneously can make the office unusable for everyone else—video calls drop, uploads stall, web browsing grinds to a halt. The connection that was fine at 9am becomes a problem by 10.
This isn't a specific platform failure. It's the physics of virtual drives applied to large files in a shared network environment. A virtual drive client isn't making one clean download request—it's making continuous, sustained requests as the timeline plays and the cache tries to stay ahead. Multiple users doing this simultaneously creates contention that a standard office router wasn't specced to handle.
Upload is the harder constraint still. Most business broadband is asymmetric—download is fast, upload is slow. Every change written back to a virtual drive competes with everything else on that upstream channel. An editor saving a project file, a designer exporting assets, a producer uploading a brief—all of them fighting for the same narrow pipe.
Enterprise-grade symmetric fibre solves this. Most agencies aren't on it. And even those that are find bandwidth becomes a shared resource negotiation rather than an infrastructure given.
Your machine pays a performance tax
Virtual drive clients are not passive background processes. They run continuously: monitoring cache state, prefetching file segments, managing the connection, syncing write-backs. On a machine where a power user is already pushing CPU and GPU hard—a DaVinci Resolve colour grade, an After Effects render, a Cinema 4D simulation—that background overhead is not trivial. It competes for RAM. It generates disk I/O on top of the read/write load the application is already creating. On machines without discrete storage controllers it can introduce latency that shows up as dropped frames, stuttered playback, or render slowdowns.
The cruel irony: the users who need virtual drives to work most reliably—power users doing the heaviest media work—are exactly the users for whom the performance tax is most painful. Every spare CPU cycle matters when you're pushing a machine to its limit. A virtual drive client running in the background is the uninvited guest at that party.
Working from files that sit on a local SSD or RAID eliminates this entirely. The disk reads directly. No background process is managing a cache or maintaining a connection. The performance ceiling is the machine itself—not the machine minus the overhead of a cloud client.
Friction doesn't just slow people down—it trains them to route around the system
This may be the most important point in this entire article, and it's the one no platform will ever mention in its marketing.
When a storage platform creates enough friction—a cache miss at a critical moment, a slowdown under load, a file that isn't where it should be—users don't file support tickets. They adapt. The editor who gets burned once by a dropped connection mid-deadline copies the files to their local drive and works from there. The designer who finds the shared drive slow on upload starts saving locally and "syncing later." The motion artist who can't afford the performance hit just works off the SSD and uploads at end of day.
Within weeks, the shared platform is a partial truth. Some files are there. Some aren't. Nobody's entirely sure which version is current. The collaboration promise has quietly collapsed—not because the platform failed technically, but because the friction it created was just high enough to make working around it the rational choice under pressure.
This is how platforms fail in practice without ever technically failing. The system is up. The uptime metrics look fine. The SLA is being met. And the team has organically abandoned it.
It also has a cost implication: the organisation is paying for a platform it has partially stopped using, while also managing the version control chaos the platform was supposed to solve.
The best storage solution is not the one with the most features. It's the one your team will actually use consistently, under pressure, on a bad day with a deadline approaching and a client on the phone. Technical capability is secondary to behavioural reliability. Any evaluation that doesn't account for this is measuring the wrong thing.
3. The InfoSec Argument—And Why It Doesn't Hold
If you work in an agency or enterprise and you've pushed back on virtual drives, you've probably heard this: "IT and security prefer virtual drives because when someone leaves, we can lock them out immediately. No files on local machines means no data walking out the door."
The logic is coherent. Server-side access control means a terminated employee can be locked out in seconds, remotely, cleanly. Nothing persists on their local machine because nothing substantial was ever there. Compliance teams love it. Lawyers love it. CISOs can point to it as a control in their security framework.
They're not wrong about the threat model. Data exfiltration via departing employees is a documented risk. The instinct to control access at the server layer rather than the device layer is professionally defensible.
But it's the wrong solution to that problem, applied to the wrong people, at the wrong cost.
The architecture is designed for the termination scenario. The team lives with it every day. InfoSec is designing the daily working environment for the entire creative team around an event that affects a fraction of a percent of working days. The editor who needs to work offline on a flight, the animator whose cache runs out mid-render, the designer whose machine slows because the virtual drive client is competing for RAM—they're all paying a daily performance and workflow tax for a risk management model that triggers once, if ever.
There are better ways to solve the actual problem. Revoking cloud credentials on termination works regardless of whether files are locally synced—the sync stops, future changes don't propagate, new files aren't accessible. Device management platforms (Microsoft Intune, Jamf for Mac environments) can remotely wipe or lock company machines. Proper offboarding processes with IT involvement cover the genuine data risk without punishing everyone else's workflow every day.
And it may not even work. The friction that virtual drives create means users route around the system. When that person is eventually terminated and IT revokes their access, the files that actually matter to the data exfiltration risk may already be on their local machine—copied there weeks ago because the virtual drive was too slow to work from under deadline. The security model failed at the moment it created enough friction to push people offline. InfoSec has the appearance of control without the reality of it.
The portfolio problem
There is a dimension to this that is almost never discussed in IT or security conversations, because it requires understanding how creative careers actually work.
A motion designer's portfolio is their CV. A video editor cannot describe what they've made and expect to get hired—they have to show it. A finished broadcast package, a brand film, a motion identity system, a UI animation reel. The work itself is the evidence of capability in a way that most knowledge worker deliverables simply aren't.
The industry norm—largely unwritten, rarely explicit in employment contracts, inconsistently enforced—is that creatives retain the right to show work they created for portfolio purposes. Most of the industry operates on this assumption even where contracts technically prohibit it, because creative talent cannot function professionally without a portfolio of their own work.
When all work lives exclusively in a server-side virtual drive controlled by an employer's IT department, a departing creative loses access to their professional history on the same day their employment ends. Work they spent years creating—that represents their craft development, their career progression, their ability to get the next job—gone. Not deleted. Just inaccessible, which is functionally the same.
A creative who understands this—and most experienced ones do, from day one—will find ways to retain access to their own work throughout their employment. Not necessarily maliciously. Not necessarily with client data attached. But they will not leave their entire professional history in a system they don't control, accessible only at the discretion of an employer relationship that may end at any time without notice.
The virtual drive model designed to prevent data leaving the building virtually guarantees that experienced creatives will find ways to get their own work out, because the professional stakes are too high not to. The policy creates the behaviour it's trying to prevent—in a population with entirely legitimate reasons for it.
The real solution is not architecture. It's policy. Employment contracts that define portfolio rights explicitly. Defined embargo periods for sensitive client work. An offboarding process that includes a supervised portfolio export conversation. Client agreements that address whether finished work can appear in agency or individual portfolios. These are conversations that require legal, HR, and creative leadership in the same room. No storage platform substitutes for having them.
Designing collaboration infrastructure around distrust—and virtual drives are, structurally, an infrastructure of distrust—is noticed by the people working within it. It's not the reason creative talent leaves. But it's part of the texture of environments where people don't stay.
4. The Team Composition Problem
Here is something the comparison tables never address: your team is not one type of person.
A motion graphics artist working in After Effects on a 2GB project file has fundamentally different infrastructure needs to a video editor conforming a 10TB feature cut. A brand designer syncing 500MB of assets across two machines is not the same as a VFX artist pulling 200GB of plate photography from a shared drive. Grouping them under "creative team" and applying one storage solution is how workflows break.
The disciplines map roughly to storage needs like this:
Designers and brand teams are working with files that are large by document standards but small by media standards. PSDs, AIs, InDesign packages, motion graphics templates—typically 10MB to 2GB per file. Virtual drives are generally fine. Shared asset access, consistent file paths, and version history matter more than raw transfer speed.
Motion graphics and 3D artists sit in the middle. Cinema 4D projects with textures, After Effects compositions, Blender files, render outputs—these range from manageable to enormous depending on the project. The rendering pipeline especially creates file volume that surprises teams who haven't planned for it. A three-minute broadcast package can generate hundreds of gigabytes of render passes.
Video editors are the stress test for any storage platform. Camera formats like RED RAW, ARRI, ProRes 4444, and uncompressed 4K create individual clips that run to tens or hundreds of gigabytes. A single shoot day at a professional production level can generate 2–5TB. Proxy workflows—working from compressed stand-ins rather than camera originals—can make virtual drives viable, but only if the pipeline is properly set up and consistently followed. In practice, proxy discipline breaks down under deadline pressure, and when it does, the editor needs access to originals at the speed of thought, not the speed of their broadband connection.
The honest configuration for a mixed team is not one platform. It's a primary platform for shared asset access, a local or locally-synced working environment for heavy media, and a delivery and review layer on top. Most teams don't have this because nobody sat down and designed it—they landed on whatever the IT department approved or whatever the founding partner was already using.
5. The Platforms: What They Say vs. What They Are
Dropbox
The pitch: Your content, organised and protected—from storage to signature to review.
What it actually is: The most mature sync client on the market, now anxiously accumulating features to justify premium pricing.
Dropbox's genuine strengths are unglamorous but real. The desktop sync client is reliable in a way that has been tested over 17 years of real-world use. File Requests—the ability to receive files from clients via a link with no account required—remains best-in-class for friction-free inbound. Version history up to one year on Advanced plans has saved projects more times than most users count.
The extras—Replay for video review, Transfer for large file delivery, Sign for e-signatures, Dash for AI-powered search—represent a company that saw the commodity storage threat coming and is building a suite in response. Replay is a credible review layer for agencies doing regular video delivery, not as deep as Frame.io but cheaper and already inside your storage subscription. Transfer is a quiet WeTransfer killer for anyone already on a paid plan. Sign covers e-signatures at 3 requests per month—enough for low-volume contract work without a separate DocuSign subscription. Dash is interesting in concept but still early in both rollout and maturity.
The honest risk: Dropbox is assembling tools that most teams already have elsewhere, and charging premium storage pricing for the bundle. If you're paying separately for a review tool, a file delivery service, and an e-signature platform, the consolidation case is real. If you're already covered, the premium is harder to justify.
The architectural fact that matters: Dropbox is local sync by default. The files are on your disk. For performance, for offline working, and for the agentic AI future—everything follows from that.
Google Drive / Workspace
The pitch: Everything in one place, AI-powered, built for how teams actually work.
What it actually is: The cheapest serious option at mid-tier, bundled with a productivity suite that creative teams use partially at best.
Business Standard at $14 per user per month with 2TB pooled storage is genuinely hard to beat on pure economics. Real-time collaboration on Docs, Sheets and Slides is the best available. Gemini integration gives you AI summarisation and semantic search within the Google ecosystem.
The storage itself is the loss-leader. Drive's file management is mediocre—Shared Drive permissions have quirks that confuse teams, and Google treats every file as an opportunity to open it in a Google application. Large video files sit in Drive like objects in a display case—present but inert.
The architectural choice is critical. Mirror mode—full local copy, offline capable, AI-agent accessible—is the right model for creative teams. Stream mode is the wrong one. Many teams default to Stream because it's lighter on disk space and don't realise what they've traded away until they're on a train with no signal and a deadline.
One flag worth naming: Google is an advertising company whose primary product is data. Workspace has strong contractual protections, but the cultural reality of who builds this infrastructure and why is worth being clear-eyed about, particularly for agencies handling confidential client work.
Box
The pitch: The intelligent content cloud—secure, governed, AI-ready.
What it actually is: Enterprise compliance infrastructure that made a storage architecture decision creative teams are still recovering from.
Box's pivot toward regulated industries was the right strategic call for their enterprise base. HIPAA compliance, audit trails, retention policies, watermarking—genuinely best-in-class for document governance. Unlimited storage from the Business tier upward is a real differentiator for document-heavy organisations.
The problem for creative teams is the 2019–2020 shift to virtual-only. Box Sync—full local copy—was retired in favour of Box Drive. There was no announcement loud enough to match the operational impact. Teams that had built video workflows around locally-synced storage discovered overnight that they were now dependent on connection quality and pinning behaviour. A 5GB file upload limit on the base Business plan—for a platform claiming to serve media teams—tells you where Box's priorities actually lie.
Box has not returned to local sync. It is not coming back.
OneDrive / Microsoft 365
The pitch: The cloud storage that works where you work—inside the tools you already use.
What it actually is: The most powerful example of lock-in-as-USP in enterprise software.
OneDrive's value proposition has always been "you're already paying for it." The MS Graph layer—connecting files, emails, Teams messages, calendar events—gives Copilot genuine contextual intelligence that no standalone storage platform can match. For document workers inside the Microsoft ecosystem, this is real value.
The product is not the problem. The lock-in is.
In June 2026, Microsoft began retiring standalone OneDrive and SharePoint plans. From that point, accessing SharePoint and OneDrive functionality requires a full Microsoft 365 subscription. For organisations that purchased standalone plans for cost efficiency, this is a forced migration to a more expensive product they may not fully use. Pricing increases of 5–33% take effect from July 2026 depending on plan.
The organisations most exposed are those that bought into the Microsoft ecosystem piecemeal and now face a coherent upsell strategy that was always the destination. Creative teams inside enterprise companies who have no procurement influence are the most likely to discover this on their renewal invoice rather than ahead of it.
For creative workflows: OneDrive was not built for video, large binary assets, or high-frequency file access under production pressure. IT teams choose it because it integrates with Active Directory. Creative teams live with it because they have no alternative. The Files On-Demand behaviour—evicting files to online-only when local storage is under pressure—creates the same problem as Box Drive: a file you expect to be local is suddenly network-dependent at the worst possible moment.
LucidLink
The pitch: A shared drive that feels local, lives in the cloud. Work on any file, from anywhere, instantly—no downloading, no syncing, no waiting.
What it actually is: A genuinely differentiated infrastructure product for specific use cases, oversold for general creative team use.
LucidLink's core innovation is real. It mounts as a drive on every machine. Everyone sees the same files at the same paths. Changes propagate instantly. It works inside Premiere Pro, DaVinci Resolve, and After Effects as if it's a local NAS. For distributed post-production teams working on proxy-based workflows, it delivers on the promise. Zero-knowledge encryption is the strongest security story in this space—LucidLink cannot access your files structurally, not just contractually.
The gap between the marketing and the production reality: "no downloading" is misleading. Streaming is downloading, distributed across time. The 25GB default cache is the ceiling that video editors hit first and hardest. A single camera magazine can exceed it. Camera original workflows at any professional scale require either pinning—manual cache expansion that demands discipline—or a fundamental rethink of where originals live.
In a shared office environment, LucidLink's streaming model compounds the bandwidth problem. The cache doesn't fill and stop—it continuously replenishes as new content is accessed, creating sustained network traffic that competes with every other activity in the building.
Multiple documented outages through 2025 and early 2026 illustrate the structural risk: when LucidLink goes down, everyone goes down simultaneously. There is no local copy to fall back on.
LucidLink is not the right answer for every creative team. The key questions are whether your primary editing workflow is proxy-based, your team is genuinely distributed, your office broadband is enterprise-grade symmetric, and your team has the discipline to manage cache actively under pressure. If all four are true, evaluate it seriously. If any one isn't, the gap between the promise and the production reality will frustrate you within the first major project.
Adobe Creative Cloud + Frame.io
The pitch: Creative tools and creative collaboration in one ecosystem—built for the people who make things.
What it actually is: The best review and delivery layer in the industry bolted onto storage economics that don't work at scale.
Frame.io's video review workflow is still the standard. Timestamp-accurate comments, version stacking, client approval without a client account, direct integration into Premiere. Nothing else does this as well for video-centric teams. CC Libraries—shared fonts, colours, graphic templates—reduces real friction for teams working consistently within the Adobe stack.
The brutal economics: you are already paying substantial per-seat Creative Cloud fees. Adobe then meters Frame.io storage separately above the included 250GB Teams allowance. For any team doing real video work, 250GB is one project, possibly less.
The 2024 Terms of Service controversy—where updated language appeared to grant Adobe rights to access user content—caused lasting trust damage in the creative community. Adobe clarified the terms following the backlash. The structural fact that prompted it has not changed. For agencies handling client IP under NDA, this warrants ongoing attention rather than a resolved answer.
The honest configuration for an Adobe-first team: Frame.io for review and delivery, CC Libraries for shared assets, and a separate platform for primary storage. Most mature creative agencies arrive here eventually. The question is whether they planned it or discovered it expensively.
6. The DAM problem
Everything discussed so far assumes files live somewhere central that teams then access. A storage platform, a shared drive, a sync folder. But the reality for most creative teams in 2026 is that a significant and growing portion of their work never touches a centralised file system at all.
A designer iterating in Figma. A creative director reviewing in Frame.io. A motion artist generating in Krea or Runway. A strategist building in Notion. A social team working directly in Canva. None of those outputs are sitting in Dropbox or LucidLink—they live in the platform that created them, versioned by the tool itself, accessible via browser, exported only when someone needs a deliverable.
The storage platform isn't just competing with other storage platforms anymore. It's competing with the gravitational pull of every browser-based tool in the creative stack—each with its own storage model, its own versioning logic, its own sharing mechanic. And none of them talk to each other.
The result, for most teams, looks something like this:
The Figma file is in Figma
The video edit is in Frame.io for review
The final export is in Dropbox
The brief is in Notion
The generated images are in Krea or Midjourney's gallery
The brand guidelines are in a Notion doc linking to a Figma file
The archive... is unclear
Final approved content ends up in a DAM—disconnected from everything that created it
And the prompt, the seed, the model version that generated the hero image? Gone. Unrecorded. Irreproducible.
Nobody planned this. It accumulated. Tool by tool, project by project, as teams adopted whatever solved the immediate problem in front of them. And now the "storage decision" isn't one decision—it's six or eight decisions that nobody made coherently, resulting in a fractured asset landscape where institutional knowledge lives in platforms rather than files.
When someone leaves, things disappear. Not because files were deleted—because access to a platform lapses, a personal account held a shared project, or nobody thought to export the working files before offboarding. The work exists somewhere. Nobody can reach it.
The DAM doesn't fix it
A Digital Asset Management system should be the answer to this fracture. In theory it's the single source of truth for approved, finished assets—searchable, permissioned, version-controlled, connected to the tools that use them.
In practice:
Most agencies at the 5–50 person scale don't have one. What they have is a folder called "APPROVED ASSETS" that everyone has quietly stopped trusting, because the last three people who touched it had different ideas about what "approved" means.
Enterprise teams that do have a DAM often have one chosen by the marketing department, serving brand managers rather than production workflows. It holds logos and brand guidelines. It does not hold the C4D source file for the product visualisation, the Premiere project for the brand film, or the After Effects composition for the motion system.
The DAM holds the output but not the provenance. Which becomes a problem the moment anyone asks "where did this come from," "can we recreate this," or "what version of the logo is in that campaign."
The provenance gap—the problem nobody has solved yet
As generative AI becomes part of the creation pipeline—Krea, Runway, Figma Weave, Adobe Firefly, and whatever comes next—the question of what a "source file" even means gets genuinely complicated. The prompt is the source. The seed number is the source. The model version is the source. The guidance image is the source. None of that lives in a DAM field by default. None of it is captured in a file system. And none of the storage platforms in this comparison have a meaningful answer for it yet.
The image that became the campaign hero exists. It's in the prompt generation gallery of whoever generated it. If that person leaves, the gallery goes with them. If the account lapses, the generation history disappears. If someone asks six months later whether the image was generated with a model trained on licensed data, nobody can answer.
This is not a hypothetical risk. It is a live legal and operational exposure for agencies doing generative work for clients, and it sits entirely outside the storage conversation as it's currently being had.
The fractured landscape—browser tools, sync platforms, review layers, DAMs, and generative histories—means the file-based storage decision is necessary but no longer sufficient. The storage platform is the most visible part of a much more distributed problem. Getting it right matters. But getting it right while ignoring the rest of the landscape is tidying one room in a house that needs a structural survey.
7. The AI Question
Every platform in this space is now marketing AI integration. MS Graph and Copilot. Gemini across Workspace. Dropbox Dash. Box AI. The pitch is consistent: AI will help you find, summarise, and organise your content.
This is real, and it's worth less than it sounds for creative teams.
The AI integrations across all these platforms operate on the cloud side—they query an index of your files rather than working with the files themselves. For search and summarisation, genuinely useful. For agentic tasks—renaming assets by reading metadata, transcoding files, quality-checking deliverables against a brief, generating shot lists from folder contents, flagging missing files before the client does—a cloud-side index is insufficient. The agent needs to reach the actual files.
This creates an uncomfortable insight: the storage architecture that best supports agentic AI is not the newest or most technically sophisticated. It's local sync.
A fully local copy of your files is a complete, addressable file system. Any agentic system—Claude via MCP, Copilot operating locally, or any connected workflow tool—can traverse it, read it, act on it, and write back to it. The changes propagate to the cloud and to every other team member through the sync mechanism. The storage platform becomes infrastructure. The AI is the intelligence layer on top.
LucidLink's virtual streaming model becomes a liability here. An agent can only act on what's local. With a 25GB default cache against a 10TB project, the agent hits a wall immediately. The single source of truth in the cloud that makes LucidLink elegant for human collaboration makes it largely opaque to agentic systems that need to traverse a full file tree.
Dropbox's local sync model looks old-fashioned against LucidLink's streaming architecture. For agentic AI access in the next 6 to 18 months, it's exactly right.
The platforms selling native AI as a differentiator may be solving the wrong problem. The question isn't which storage platform has AI built in. It's which storage model lets AI agents actually reach your files and do something useful with them. The answer is less glamorous than the marketing: local sync wins, virtual drives lose, and the most AI-ready creative workflow may be the one built on the oldest architectural model.
8. The Archive Problem
Ask most creative agencies what their archive strategy is. The honest answer—delivered with varying degrees of embarrassment—is usually a collection of drives in a cupboard, a cloud folder called "OLD PROJECTS," or an expensive storage tier where completed work sits forgotten at full resolution and full ongoing cost.
This is not an archive strategy. It's deferred decision-making.
The economics of long-term creative storage work on a tiered model that almost no agency applies deliberately:
Hot storage (active projects, last 12–24 months): whatever primary platform the team is using. High cost, instant access, acceptable because the work is generating revenue.
Warm storage (completed projects, 2–4 years): Backblaze B2 or Wasabi. S3-compatible, GDPR-compliant with EU data centres, approximately $6–7 per TB per month with minimal egress fees. Files accessible within seconds. Appropriate for projects a client might reasonably request revisions from.
Cold storage (4+ years, rarely accessed): AWS S3 Glacier Deep Archive or equivalent at approximately $1–2 per TB per month. Retrieval takes hours. Intentional friction for intentional access.
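The gap between tiers is easy to underestimate until you run the numbers. A rough annual-cost comparison, using the warm and cold per-TB figures above (the hot-tier rate is an assumption for a business sync plan, not a quoted price, and the archive size is illustrative):

```python
# Illustrative monthly cost per TB for each tier.
HOT_PER_TB = 20.0   # assumed effective rate on a primary sync platform
WARM_PER_TB = 6.5   # Backblaze B2 / Wasabi ballpark, from the range above
COLD_PER_TB = 1.5   # S3 Glacier Deep Archive ballpark, from the range above

def annual_cost(tb: float, per_tb_month: float) -> float:
    """Yearly storage cost for a given volume at a given monthly rate."""
    return tb * per_tb_month * 12

archive_tb = 40  # a hypothetical mid-size agency's completed-project archive
print(f"hot:  ${annual_cost(archive_tb, HOT_PER_TB):,.0f}/yr")
print(f"warm: ${annual_cost(archive_tb, WARM_PER_TB):,.0f}/yr")
print(f"cold: ${annual_cost(archive_tb, COLD_PER_TB):,.0f}/yr")
```

At these assumed rates the same 40TB archive costs roughly $9,600 a year sitting on hot storage, $3,120 on warm, and $720 on cold. Deferred decision-making has a price tag.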
The format question is rarely addressed. ProRes files are large by design. A one-hour ProRes 422 HQ file at 1080p runs approximately 100–200GB. Converting projects older than four years to HEVC at high bitrate achieves a 6–8x file size reduction with visually indistinguishable results for playback purposes. The trade-off is real—HEVC is lossy, and re-grading or isolated frame extraction at original quality becomes impossible. The pragmatic middle ground: preserve a ProRes proxy master in cold storage alongside the HEVC display copy. Edit intent survives. Storage cost drops dramatically.
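As a back-of-envelope check on those figures (the per-hour size uses the midpoint of the 100–200GB range, the 7x factor sits in the middle of the quoted 6–8x reduction, and the catalogue size and cold-storage rate are assumptions):

```python
# Illustrative figures: ~150GB per hour of ProRes 422 HQ at 1080p,
# 7x HEVC reduction, and a hypothetical 100-hour back catalogue.
hours = 100
prores_tb = hours * 150 / 1000   # 15.0 TB in ProRes
hevc_tb = prores_tb / 7          # ~2.1 TB after transcode

cold_per_tb_month = 1.5          # assumed Glacier Deep Archive ballpark rate
before = prores_tb * cold_per_tb_month * 12
after = hevc_tb * cold_per_tb_month * 12
print(f"ProRes archive: {prores_tb:.1f}TB  ~${before:,.0f}/yr cold")
print(f"HEVC archive:   {hevc_tb:.1f}TB   ~${after:,.0f}/yr cold")
```

Even at cold-storage rates, transcoding takes a hypothetical 100-hour catalogue from roughly $270 a year to under $40. At warm rates, or on a hot sync platform, the multiplier is the same but the absolute savings are far larger.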
Photography RAW files should not be transcoded regardless of age. Archive them as-is and move them to cold storage.
One model worth considering: the physical drive handoff. At the end of a defined retention period, archive assets are delivered to the client on a labelled, catalogued drive as a service deliverable. The client takes responsibility for their own archive. The agency maintains a cold cloud backup. This externalises the cost, creates a clean offboarding mechanism, and has genuine value as a premium service offering rather than an internal overhead.
9. The IP and Trust Question
Two events in the last two years changed the creative industry's relationship with the platforms it stores work on.
WeTransfer's 2025 Terms of Service update included language permitting the platform to use uploaded content to train AI models. The creative community's response was swift and decisive, and WeTransfer walked the language back within days. The underlying platform had not changed. The relationship to it had.
Adobe's 2024 Terms of Use update included similar language. Adobe's response was faster and more substantive: it clarified the terms and engaged publicly with the concerns. But the structural reality that prompted the controversy has not changed. Trust, once questioned at this level, requires ongoing evidence to rebuild.
Both events point to the same question every creative team should now be asking of every platform they store work on: what does this company's commercial model incentivise them to do with our content?
LucidLink's zero-knowledge encryption means they structurally cannot access your content. The encryption keys are held by the user. This is not a policy commitment that can be changed with a TOS update—it's an architectural reality. For agencies handling client IP under NDAs, this is the strongest story in the space.
Dropbox, Google, Microsoft, and Box all hold encryption keys. All have contractual data protection commitments. None have zero-knowledge architecture.
Adobe's position is the most complex because their commercial incentive to access creative content is the strongest and most direct of any platform in this comparison.
The right question is not "could they access my files" but "what happens if their commercial priorities shift and their TOS reflects that." History now gives us the answer. It deserves ongoing attention rather than a one-time assessment.
10. Finding Your Answer: Three Different Situations
The Solo Creative Who Collaborates
You need reliable local sync, large file delivery to clients without friction, version history, lightweight review capability, and GDPR-safe storage.
Dropbox Professional remains the honest answer. The sync client is the most mature available. File Requests handle client inbound without account friction. Transfer replaces WeTransfer for outbound delivery. Replay handles basic video review. Version history covers accidental deletion.
Worth adding: Frame.io's free tier for review and approval on video work. Direct Premiere integration, professional client review interface, no monthly cost at low volume.
Budget-constrained alternative: Google Drive Business Standard with Mirror mode enforced. $14 per month, 2TB, full local copy, works offline. You lose the client-facing extras but the fundamentals are solid.
The Agency: 5–50 People, Hybrid Working
The most complex situation and the most underserved by the current market.
The disciplines in the team determine the configuration more than anything else. The honest recommendation is a two-layer model:
Primary storage and sync: Dropbox Business Advanced where local sync is non-negotiable and the editor/animator ratio is high. Egnyte Business for teams with hybrid IT environments that still include on-premises storage. Google Drive Business Standard in Mirror mode, enforced by policy, for design-heavy teams where file sizes are lighter.
Review and delivery: Frame.io for video-centric agencies. Dropbox Replay for agencies where video is part but not the centre of the offering.
LucidLink belongs in this evaluation for agencies doing distributed video post with a disciplined proxy workflow, enterprise-grade connectivity, and IT resource to configure it properly. It does not belong as a default recommendation for a general creative agency that hasn't assessed whether its workflows meet all of those conditions.
The In-House Creative Team Inside an Enterprise
You are probably already on OneDrive (or, less commonly, Google Drive; some enterprises are Google-first rather than Microsoft-first). You may have limited ability to change that.
The immediate priority: understand your organisation's Microsoft 365 plan and what the renewal looks like in light of the June 2026 standalone plan retirement. If you don't know, find out before the next renewal. The difference between being informed going in and discovering it on an invoice is significant.
Within the Microsoft ecosystem: push IT to configure OneDrive's "Always keep on this device" setting for creative working files as a standard policy for creative machines. This converts Files On-Demand to functional local sync, restores performance, and makes the agentic AI future accessible.
Beyond Microsoft: the conversation worth having with IT is whether Frame.io for review and delivery, and a secondary storage layer for creative-specific workflows, can run alongside OneDrive. The argument is not "replace OneDrive." It's "give the creative team the infrastructure appropriate to what it produces."
11. The Honest Conclusion
The storage decision for creative teams in 2026 is not primarily about price. It's about architecture, trust, future readiness, and whether the platform you're on was built for people doing what you do.
Most were not. They were built for documents and adapted for creatives as an afterthought. The AI integrations being marketed across all platforms are optimised for text and compliance workflows, not for video, motion graphics, and large binary assets. The storage models that best support the agentic AI tools arriving in the next 6 to 18 months are, counterintuitively, the oldest ones—full local copies that any system can read and act on.
The platforms that held creative team trust longest solved real creative workflow problems rather than selling features that sounded good in demos. The platforms that lost trust fastest did so by prioritising their commercial AI models over the working relationship with their most loyal users. History now shows how quickly that erodes something that took years to build.
The smartest thing a creative team can do right now is not switch platforms. It's to understand the architecture of what they're already on. What's local. What's virtual. What happens when the connection drops. Who holds the encryption keys. Whether the platform's commercial model aligns with the trust being placed in it. What the renewal looks like in six months.
Then make a deliberate decision rather than inheriting one.
The file storage conversation used to be simple. It isn't anymore. The teams that recognise this first will be the ones still working smoothly when everyone else is explaining to a client why the deadline slipped because the cache ran out.
Pricing and plan information current as of Q2 2026. Microsoft 365 standalone plan retirement effective June 2026; price increases effective July 2026. LucidLink plan structure subject to change—verify current pricing directly before committing. Adobe Frame.io storage inclusions vary by Creative Cloud plan tier.
This article does not represent sponsored content. No platform has reviewed or influenced the content prior to publication.