I have compiled a little list of the enhancements I think DigitalOcean (DO) should be making.
These enhancements are linked from the DO community feedback website.
DO's competitors allow for pooled bandwidth; the community would like the same for load-balanced and floating-IP-enabled networked droplets.
DO should improve the customer experience during DDoS attacks. For inexperienced customers, a DDoS attack can be a scary and lonely time, when support is needed most.
I have been hit by DDoS attacks in the past and have built systems to mitigate and end them. I am fortunate to have that experience should it happen again.
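One basic first layer of the kind of mitigation I mean is per-IP request rate limiting at the edge. A minimal nginx sketch follows; the zone name, rates and backend address are illustrative placeholders, not DO settings:

```nginx
# Hypothetical sketch: per-IP rate limiting as a first DDoS mitigation layer.
# Track clients by address in a 10 MB shared zone, allowing 10 requests/second.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;

    location / {
        # Permit short bursts of 20 requests; excess requests are rejected
        # (503 by default) instead of reaching the backend.
        limit_req zone=perip burst=20 nodelay;
        proxy_pass http://10.132.0.10:8080;
    }
}
```

This will not stop a volumetric attack on its own, but it keeps application droplets standing while upstream filtering is arranged.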
Labels make everything better. It is as simple as that.
I think labels make everything better on GitHub issues and pull requests; DO needs to add this as soon as possible.
Customisable labels, with download-enabled and upload-enabled label settings, should be the stretch goal for DO.
Similar to the enhancement above, DO should add description fields for all droplets. A simple addition that will make a huge difference.
DO is considering IPv6 /64 allocations, and with that one would hope for more IPv4 addresses too, but there are simply not enough IPv4 addresses left. Unfortunately, it is as simple as that. DO could buy great big chunks of IPv4 addresses, at great cost, from governments (such as the UK's), which have vast ranges of the IPv4 space sitting idle and have been selling off big ranges recently. IPv6 cannot be mass adopted quickly enough.
I wrote the above in the feed. I think this is unlikely to be successful for all data centres and all customers, so I wanted to raise awareness of the issue.
I am sure DO could find more IPv4 addresses if it tried, but the cost involved is not appetising.
You could build a load balancer from a droplet with private networking. Nginx is an excellent load balancer. I think floating IPs need to be enhanced to have a lot more well-rounded SDN features.
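To sketch the droplet-as-load-balancer idea: a single nginx droplet can proxy over backend droplets on the private network. The upstream name and the 10.x private addresses below are illustrative assumptions, not real DO assignments:

```nginx
# Minimal sketch: one nginx droplet balancing two backend droplets over
# DO private networking. Addresses and names are placeholders.
upstream backend {
    least_conn;              # route each request to the least-busy droplet
    server 10.132.0.2:80;
    server 10.132.0.3:80;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        # Preserve the original host and client address for the backends.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Pointing a floating IP at the nginx droplet then gives you a crude, self-managed load balancer today, which is exactly why richer SDN features around floating IPs would help.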
The last point I made in the feed is the most important and the most likely to be implemented. Floating IPs need to be enhanced with extended support from the SDN at the heart of the DO data centres.
Floating IPs are all well and good, but they seem a little underwhelming to me.
I am working on the MSD CDN, so I may well be a little spoilt.
Uploading an image on most people's Internet connections is going to be tough, and would result in DO requiring dedicated ISP/IXP links just for ISO uploads. There would have to be a large amount of caching between DO and the ISP/IXP, because many people would be uploading large ISO files at different bandwidths or data rates. This is all doable, but it is not as easy as one might expect; it will take a decent amount of work from both DO and the ISPs/IXPs. I think a support-ticket-based service would be a little annoying: upload and go, in my opinion. A community-run ISO upload system could be interesting, with community approval for opening an ISO to the whole community.
This is something I would very much like to see DO implement.
Custom ISOs, whether community-built, user-built or simply uploaded, would be a huge bonus to the community and the individual user.
MSD has its own ISOs for a wide range of OSes. I would very much like to use them in DO data centres across the world.
I am a massive user of FreeBSD and FreeBSD-based distributions.
I use FreeNAS and have built MSD FreeSAN from FreeNAS.
I have made my own forum suggestion for FreeNAS and other SAN software to be made available in a SAN-specific area of the droplet creation page.
I would like to be able to configure my own Cloud SAN for use with DO's Droplets and network architecture.
This is what one very informed member of the community had to say on the matter in the feed:
This is not possible at the moment. Allow me to explain. I was looking into that for some VMs I am running here, and this is what I found. The way the x86 architecture evolved from the early '80s until today makes emulating a GPU extremely difficult: at first, video was memory-mapped, then it was port-mapped, then it was a combination of the two, and with 3D acceleration it is effectively an embedded computer within your computer, complete with dedicated memory banks and an assembly language of its own. And all of that is designed to be backwards compatible so your computer's BIOS can initialise it.

Emulating that requires one of those GPUs to be reverse-engineered, but doing so nowadays is a sure way to get served and dragged into court for copyright infringement. After all, if you can do that to, say, an Nvidia chip, what's to stop you from manufacturing cheaper clones? To my knowledge, the best GPU card to ever be reverse-engineered in this way is... drum roll... Cirrus VGA. I think you just *might* be able to run Windows 95 on that.

This problem isn't unique to GPUs. Back in the early 2000s, the same was true of the x86 architecture itself: your CPU's instruction set is backwards-compatible all the way back to the 8086 CPU of the early 1980s, by means of special "trap" instructions that enable different extensions as your computer boots a modern OS. Reverse-engineering the x86 architecture was extremely difficult, and virtual machines were super slow and used to crash a lot. Early hypervisors were actually not even designed to emulate the x86 architecture. x86 virtualisation only became practical relatively recently, in 2005, when Intel introduced the virtualisation extensions (VT-x), which allow an Intel CPU to emulate itself in hardware. Overnight there was no need to emulate an Intel CPU anymore.
There is still no real equivalent for a GPU, though: the best I've seen today is the so-called Intel VT-d extensions, which allow an Intel CPU to virtualise its PCI bus, which in turn allows you to expose a physical dedicated GPU to a virtual machine. Needless to say, you need a physical GPU for that, and many popular GPUs don't even work with it.

When you run software such as VMware Workstation/Fusion or Parallels Desktop, you are running a hypervisor which only really emulates an old and slow GPU, used only to complete the install of your guest OS. Beyond that, you need to install special "bridge" drivers which funnel your guest OS's display APIs into your host OS's display API by some means. This is what all the hypervisors do, without exception! It is the only practical way the GPU emulation problem is sidestepped so you get to enjoy some decent video on a virtual machine.

This, unfortunately, does not extend to the data centre: a server motherboard meant for a rack-mountable enclosure often has no GPU card to speak of, or if it does, it is something very simple, only meant to drive the rare console caddy at VGA resolution. Even if that weren't the case, there is no standard transport mechanism capable of ferrying the tens of gigabytes of data a GPU creates every second to your screen. The longest DisplayPort cable is 50 feet, I think. You are asking Digital Ocean to do the impossible. Don't hold your breath.
I completely agree with the above user's response. I do wish to raise the point, though, that this is feasible: it has been done, it can be done again, and I have done it myself.
I would also like to point out the costs involved and my main issue with the above user's response.
Cost of gaming GPUs: £400 to £2000 / $300 to $1500.
Cost of server or workstation compute and/or rendering GPUs: £1000 to £4000 / $800 to $3200.
Of course, there are exceptions at both ends of the spectrum, where the cost will be lower or much higher than stated.
The idea of a user wanting to stream live renderings from a Cloud server is one of a lunatic. Cloud infrastructure is not geared for that sort of data, with live rendering and continual frame syncing; it cannot cope. Not to mention the vast lag you will have between manipulation and rendered output.
The point of GPU-based compute farms or render farms is to compute or to render a final product, not to render live. You have a live render farm or live compute farm locally, and a much more powerful, full-scale compute or production render farm in the cloud.
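To put a rough number on why live streaming from a cloud GPU cannot cope, here is a back-of-envelope estimate (my own arithmetic, not a DO figure) of the sustained bandwidth uncompressed 1080p rendering would need:

```python
# Back-of-envelope estimate of the bandwidth needed to stream uncompressed
# 1080p frames from a cloud GPU. Assumed figures: 32-bit RGBA, 60 fps.
width, height = 1920, 1080
bytes_per_pixel = 4          # 32-bit RGBA
fps = 60

bytes_per_frame = width * height * bytes_per_pixel
bytes_per_second = bytes_per_frame * fps
gigabits_per_second = bytes_per_second * 8 / 1e9

print(f"{bytes_per_frame:,} bytes per frame")          # 8,294,400 bytes
print(f"{gigabits_per_second:.2f} Gbit/s sustained")   # 3.98 Gbit/s
```

Nearly 4 Gbit/s sustained, before any frame syncing or input lag is even considered, which is far beyond a typical customer's connection; compression helps, but a final render shipped once is clearly the better fit.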
The above list is a well-rounded look at the main points I would like DO to address as quickly as possible and potentially turn into features.
I would like all of these suggestions to become features.
They all have their merits and will enhance the workflow of developers relying on DO's outstanding support and community engagement, for which I commend DO very highly.