Ask Slashdot: Is Dockerization a Fad? – Slashdot

April 30, 2019 - MorningStar


 



Ask Slashdot: Is Dockerization a Fad? 90

Posted by EditorDavid from the containing-your-excitement dept.
Long-time Slashdot reader Qbertino is your typical Linux/Apache/MySQL/PHP (LAMP) developer, and writes that “in recent years Docker has been the hottest thing since sliced bread.” You are expected to “dockerize” your setups and be able to launch a whole string of processes to boot up various containers with databases and your primary PHP monolith with the launch of a single script. All fine and dandy this far.

However, I can’t shake the notion that much of this — especially in the context of LAMP — seems like overkill. If Apache, MariaDB/MySQL and PHP are running, getting your project or multiple projects to run is trivial. The benefits of having Docker seem negligible, especially having each project lug its own setup along. Yes, you can have your entire compiler and Continuous Integration stack with SASS, Gulp, Babel, Webpack and whatnot in one neat bundle, but that doesn’t seem to diminish the usual problems with the recent bloat in frontend tooling — on the contrary….

But shouldn’t tooling be standardised anyway? And shouldn’t Docker then just be an option for those who can’t be bothered to have (L)AMP on their bare metal? I’m still skeptical of this Dockerization fad. I get that it makes sense if you need to scale microservices easily and quickly in production, but for ‘traditional’ development and traditional setups, it just doesn’t seem to fit all that well.

What are your experiences with using Docker in a development environment? Is Dockerization a fad or something really useful? And should I put up with the effort to make Docker a standard for my development and deployment setups?
The original submission ends with “Educated Slashdot opinions requested.” So leave your best answers in the comments.


Comments Filter:

  • by alphad0g ( 1172971 ) writes: on Sunday June 02, 2019 @07:30PM (#58697284)

With Docker you can scale out, and you can (usually) take your Docker containers to other sites much more easily than you can transport a LAMP stack running on an OS.

    If you don’t need these things then docker is probably overkill.

      • Let’s say you have a simple web application. Load balancing is just deploying the same package over multiple web server installs. The issue there is that your package is standard, but the OS, the web server etc. tend to be installed or managed manually. So essentially some admin can unknowingly change something that causes failures or slowness that is hard to trace. Docker simply prevents such a scenario. The thing is, Docker is something that can be learned in 2 hours, so while the benefits might be trivial i

      • by Anonymous Coward writes:

        If you are doing docker commit on everything and using it like a VM, yes, but that sort of misses the point, though in some cases it still might make sense.

        The ideal way is to script your Docker packaging, so swapping the base image becomes trivial. Think of it as having a fully scriptable VM package that is a little more lightweight than full-fledged VMs.

        Doing that with VMs rather than Docker is a little more cumbersome, but if you have discipline it’s nothing that can’t be done…. except that VMs still have
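A sketch of the scripted-packaging idea above, with the base image as a build argument so swapping it is a one-line change at build time; the image tag and paths here are illustrative, not from the discussion:

```dockerfile
# ARG before FROM lets the base image be chosen at build time:
#   docker build --build-arg BASE_IMAGE=php:7.4-rc-apache .
ARG BASE_IMAGE=php:7.3-apache
FROM ${BASE_IMAGE}

# Bake the application into the image instead of committing a running
# container, so every build is reproducible from source.
COPY src/ /var/www/html/
```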

      • by crow ( 16139 ) writes: on Sunday June 02, 2019 @08:00PM (#58697404) Homepage Journal

        Scalability is gained because Docker has a lower overhead than virtual machines. In theory, you can run tons of Docker images on the same physical machine, and they’ll share all the elements they have in common, so for storage in particular, you eliminate a lot of redundancy. Also, it makes it much easier to run a very minimal production image without any development tools, instead of a full Linux distribution which is what most people would end up with on a VM solution. That also makes it easier to avoid running any daemons that are normally part of your distribution but not relevant for your application.

        Of course, that’s all in theory. I haven’t needed to put it to practice myself, so I can’t comment on how well it works out in the real world.
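The minimal-production-image idea described above is typically done with a multi-stage build: dev tooling lives only in the first stage and never reaches the final image. A sketch, assuming a Composer-based PHP app (names and tags illustrative):

```dockerfile
# Stage 1: build with Composer and dev tooling available
FROM composer:1 AS build
COPY . /app
RUN composer install --no-dev --optimize-autoloader --working-dir=/app

# Stage 2: slim runtime image; no Composer, no build tooling
FROM php:7.3-apache
COPY --from=build /app /var/www/html
```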

          • You want to give a job to a guy who talks fairly well and then says “I think – because I’ve never really done it”?

    • That isn’t really true; Puppet, Chef, etc. all did basically the same thing for applications/environments. It’s more about isolation, which allows people to share resources more effectively.

      Scalability is still entirely the obligation of the application developer.

      • Docker is a new hammer, and everything is a nail. Running unprivileged workloads has a security benefit, save for the bugs found so far, and the fact that Docker runs as root.

        Docker is seductively simple, and such things always get misused. This said, fast orchestration and tear down for compliance sake is pretty simple– if you pull clean workloads and keep them clean.

        Learn Libvirt, or do it manually with cgroups, lxc, and various secret file system and networking sauces. The diversity in choices is why do

  • I’ve personally only used docker a little bit, but where I have found it useful is on my QNAP. The QNAP OS doesn’t have the full range of tools available, and not everything is available via entware. Docker is able to fill the gaps.

    For example, I have a tool I’ve written that is several times faster using pypy than python (the BeautifulSoup library is so much faster using pypy). Unfortunately, pypy isn’t available in entware, and I’ve failed in getting it compiled for my QNAP. However, there’s a docker container with pypy that I was able to install into Container Station, and I’m able to run it using that.

    • Of course, you can also use LXC on the QNAP, and for some things it’s more appropriate (e.g. I have LXC containers acting as VPN clients and SOCKS5 proxies).

      Basically, I use Docker where I just want to be able to run a tool, and LXC for when I want a complete environment (including cron, etc).
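For the “just run a tool” case, the whole trick is often a single docker run; something like this (the script name is illustrative) runs a script under pypy without installing anything on the host:

```shell
# Mount the current directory and execute the script with the
# containerized pypy; --rm discards the container afterwards.
docker run --rm -v "$PWD":/work -w /work pypy:3 pypy3 scraper.py
```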

  • If you don’t know how to write an automated deploy script, then it’s helpful. A lot of people don’t know that, for some reason, but it’s not going to change.

    There are some other advantages, for example, it makes it easier to transfer between AWS and Google cloud, but that’s mostly not why people use it.

    Docker will lose popularity if someone figures out an easier way to deploy.

  • by malkavian ( 9512 ) writes: on Sunday June 02, 2019 @07:51PM (#58697356)

    That, to me, is what containerisation is all about.
    Having a database engine that’s effectively isolated from the web server host, and those sitting on one virtual (of which you can have many), adds a layer of security.
    Then there’s the ability to have either ephemeral or persistent containers (or any combination of those) on a single virtual.
    And swarm clustering for high availability (or even powering up more nodes on underused hardware when one of your apps needs extra grunt, and scaling back when it doesn’t).
    I think there’s a lot to be said for it, personally. But it’s an option I’d only bring in on very mature infrastructures. There’s absolutely no point in bringing in containers if you’re not capable of maintaining the hardware that comprises your infrastructure. Or if the virtuals that sit on your hardware aren’t second nature, and backing up and recovering them is child’s play.
    Way too many people think that it’s simplicity to run ‘disposable’ containers, and they will definitely work great, until you discover that something really isn’t right.
    That’s when you absolutely have to have a full, and complete understanding of everything you’ve done, and how it _really_ works.

    As long as you have the mature infrastructure and skill base, then containers are damnably useful. But they’re definitely ‘cherry on the cake’.

    • Having a database engine that’s effectively isolated from the web server host, and those sitting on one virtual (of which you can have many), adds a layer of security

      How does it add a layer of security? If someone gains access to the web server host, then they have access to the DB, because obviously the credentials and ability for the web server to access the DB must reside there. At the very least they could simply modify the existing code that accesses the DB to do other things.

      Now, in addition to the web server being an attack vector, the DB itself is also an attack vector that can potentially be accessed directly, since it resides elsewhere and allows (at least s

      • Indeed. It does _not_ increase security. It can make security someone else’s responsibility (great for the standard developer who has no clue about security), but it is always _your_ problem when it fails.

    • You pay for that with potential security problems within each container and in the containerization layer. It actually decreases security in general. It may slightly increase the skills an attacker needs. If you actually care about security, what you want instead is something like SELinux or AppArmor, configured restrictively, and, of course, good use of the standard UNIX isolation model. But that needs real skill, hence this ElCheapo approach and the fantasy that this is good for security.

    • There are many ways to separate concerns. If a separation technique is burdensome, then extra separation may not be worth it.

  • I mean technically, you don’t need jar or war files to deploy a Java project either, but I wouldn’t describe those as “fads”. They are just a convenient way to pull together a complete package in one file. Docker is sorta like that: it takes the entire system deployment (the OS library level, the runtimes that you need, and hooks for configuration for different systems) and makes it into a convenient image that can be versioned and served from repositories. You could say the same thing about VMWare files and Vagrant before that. The advantage is that Docker is smaller and a bit easier to build. It’s easy enough that it’s become somewhat of a standard for Kubernetes, AWS Fargate, and lots of other platforms. Will something come along to take its place? Maybe? Probably? We’re already seeing the next level of deployment artifact in Helm charts for Kubernetes, though they tend to include Docker.

  • by jemmyw ( 624065 ) writes:

    Containers are not a fad. They’ve been around longer than docker. Docker itself might not be the long term solution, but containers will remain and evolve. Docker is just some tooling that made containers easier, and I think there was pent up demand for being able to create an immutable image for a single exe.

    • by Anonymous Coward writes:

      Docker, and all other containers, are just chroot with more security holes for people that are scared of command lines.

      They are all fads.

    • That sounds very much along my thoughts. Some workloads are supremely better at being containerized (such as things you would otherwise consider using a chroot jail for), and some aren’t. But Docker allows you to easily replicate the same kind of environment for local development as production, which avoids the common “it works on my machine” problems even Vagrant VMs can have.

      And then there are CaaS offerings like Amazon’s Lambda/API Gateway and CodeBuild services that use containers even though you’re not

  • by Anonymous Coward writes:

    I’m looking for a job. I’ve been on maybe 7-8 interviews and if the company doesn’t currently use docker, it’s on their roadmap. I’ve only used it a little, not for a full tooling, and it’s always frowned upon when I mention this. Every interview I go to either has, or is working towards, continuous deployment and the fact that I don’t have this is heavily frowned upon too. I don’t know what the next fad will be, but this one is pretty strong at this point.

    • Every interview I go to either has, or is working towards, continuous deployment and the fact that I don’t have this is heavily frowned upon too.

      In the interview, just say how much you like it, how great you think it is.

  • It is, of course, not the silver bullet some are trying to sell. But, then again, this happens often in the industry – witness things like object-oriented design or Agile.

  • I’ve only been using Docker about a week, so take this with a grain of salt.

    Docker itself may go away, but something similar to it is likely here to stay. When I say “something similar to it”, I mean something with its own pids, files and IP ports.

    I think that because trying out the next release of your dependencies becomes as (theoretically) simple as changing a version number in your docker-compose file. It may not go smoothly, but at least you know exactly what was required to get things worki
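Concretely, trying the next MariaDB release could be a one-character change in a hypothetical docker-compose.yml:

```yaml
version: "3"
services:
  db:
    image: mariadb:10.3   # bump to mariadb:10.4 to trial the next release
    volumes:
      - dbdata:/var/lib/mysql   # data survives the image swap
volumes:
  dbdata:
```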

  • by steveb3210 ( 962811 ) writes: on Sunday June 02, 2019 @08:02PM (#58697410)

    Docker’s a mixed bag – we’ve had a lot of problems with its network stack when using Swarm, to the point that I would not recommend it to anyone for production use.

    I need to try Kubernetes-ing (gerund?) these things and see if life is smoother.

  • Calling it a fad tarnishes the benefits it can have for customers using it to do incremental things like breaking up their monolith, but I doubt it’s the future. Much like hosting your own infra was consumed by IaaS, the general trend is for developers to care less. Currently, serverless has some downsides, but in the long run it represents a higher level of abstraction and an obvious next step. There may be containers in use (e.g. https://cloud.google.com/run [google.com]), but the end user will operate within functions.

  • No, it’s not a fad.

    It’s an efficient and lightweight way to get stuff working while maintaining isolation of the various pieces. This makes managing upgrades and changes to the various pieces simpler, because they are isolated.

    Having apache and mysql in different docker containers, for example, has some real niceties.

    Or if you want to run more than one LAMP-stack application on the same physical PC, containers can make that easy, because you don’t have to worry about interactions and conflicts between the two
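For instance, two independent LAMP apps can coexist on one host just by publishing different ports; a minimal sketch (service names, image tags and ports are illustrative):

```yaml
version: "3"
services:
  shop:
    image: php:7.3-apache
    ports:
      - "8081:80"   # first app on host port 8081
  blog:
    image: php:7.3-apache
    ports:
      - "8082:80"   # second app on host port 8082
```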

    • It is actually heavyweight, even if that is hidden, because now, instead of administrating one system, you have to administrate one system and n containers. Sure, deployment gets easier, but everything else gets harder. And container security with Docker just sucks.

      • Docker really isn’t that heavy compared to bare metal. And it’s lightweight compared to virtualization.

        And I’m not sure what you think is ‘harder’. I find administrating containers to be a lot simpler because of the isolation. I only have to concern myself with the one service in the container. I don’t have to worry about dependency conflicts between different services, I don’t have to worry about much of anything.

        I can upgrade services, restore them, swap them around, migrate them to other physical hosts…

  • by MikeRT ( 947531 ) writes: on Sunday June 02, 2019 @08:12PM (#58697462)

    Docker Compose has been a game-changer for a lot of development teams. DevOps can customize docker images as needed, create a Docker Compose configuration and hand it back to the developers.

    New guy comes on, most of the onboarding is this now:

    1. Get laptop.
    2. Install Docker.
    3. Install Git.
    4. Checkout our repo.
    5. docker-compose up
    6. $RUN_INITIAL_BUILD
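The compose file behind step 5 might look something like this; the service names, images and credentials are illustrative, not anyone’s actual setup:

```yaml
version: "3"
services:
  web:
    build: .            # the PHP monolith, built from the repo's Dockerfile
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: devonly   # dev-only credential
```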

  • by LynnwoodRooster ( 966895 ) writes: on Sunday June 02, 2019 @08:16PM (#58697484) Journal

    Dockers – in fact, chinos in general – went out in the mid 2000s. It’s all jeans now, ideally non-denim based jeans, in black or a bright color (light brown, red, green, etc). Give it a while, bellbottoms are coming back in now. Dockers will be back in-style sometime around 2032-2035.

    • It increases complexity, just like virtualization. That is always bad for security, even if often not readily obvious.

  • …it’s a procedure, done under a local anesthetic or nitrous.
    Any competent vet can do it for ya.

  • Docker and especially Swarm itself is a great system, it has basically fixed a lot of issues and combined ideas from LXC, BSD jails and Solaris containers. The problem is that everyone then continues building a stack-on-a-stack, which is what Kubernetes and co are.

    As long as you know what it’s for and follow some basic guidelines, it works well. The problem is that you now have a hammer and everything becomes a nail.

  • PHP and Rails style development patterns came before Docker was popularized. So they don’t really fit naturally into that paradigm. Drupal expects you to copy and modify auto-generated templates, and have something like NFS shared storage for HA setups. A more modern app would probably use an S3 compatible object store, and (in all honesty) be written in Go.

    You know what’s great about docker?

    * Immutable artifacts. If you build your container correctly, it will be the exact same package on your laptop, stagi

  • OK. This has nothing to do with pushing a laptop into a chunk of plastic to make it a real computer. Good. Now I’ve learned something today.

  • Basically, the containers are not administrated by you. If you are a competent sysadmin, that is a disadvantage, potentially a huge one. If you are a typical modern developer that knows nothing about system administration, this can seem like an advantage though. It is possibly why this non-idea is so successful.

  • It has its use cases. Not everything needs to be “dockerized”. Some things make sense – like Minecraft. I’m not running each of my LAMP apps in its own Docker container, though. Just the wrong use case.

    • A lot of things are just used in the wrong place at the wrong time. The “fad” is in the misuse, not necessarily the mere existence. As they say, use the right tool for the job.

      Too many developers or architects use the workplace as a “resume lab” to pile up buzzwords. Unless your org is intentionally into R&D, limit your experiments to a mild level.


  • …but not until a better container solution comes along and dethrones it. Containerization is not a fad, but the specific implementations might not be.

