There is certainly overlap between what OpenStack does and what I would call regional or data-center-type orchestration tools do, but many aspects of each framework are distinct. Architecturally, only you can determine whether you can live with one, the other, or both.
Fundamentally, OpenStack provides lower-level control of infrastructure (VM, container, and bare-metal provisioning, plus vendor-specific integration) than any container-only framework. I would say that OpenStack is fundamentally a "cloud operating system," while container-based frameworks are more "application delivery systems." Depending on your requirements you might not need OpenStack for this; in fact, you might not need Kubernetes or Swarm either, since you can acquire resources from Amazon.
Professionally I have deployed OpenStack instances to control underlying infrastructure (network, storage, etc.) and run containers within OpenStack.
Some additional thoughts:
Distributed system scheduling and orchestration is an area of my research and as a result I have spent a great deal of time thinking about such things.
I generally think of orchestration systems in terms of the scope of their control and break them down into groups:
High performance cluster (HPC):
-Hadoop, Spark, etc.
Data center / Region:
-Microsoft Quincy and Apollo
-Google Borg and Omega
-Typically application specific
Google Borg: large-scale cluster management software, which until recently* was considered "Google's Secret Weapon".
-Two-phase scheduling: first find feasible nodes, then score them and place the task on the best one.
-High-priority (service) and low-priority (batch) scheduling, with independent resource quotas.
-Typical scheduling time is 25s; however, global (cluster-wide) optimality is not attempted when making scheduling decisions.
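To make the two-phase idea concrete, here is a minimal sketch in Python. All names and numbers are illustrative, and the scoring function is a deliberately simple "worst fit" heuristic; Borg's real scorer blends many more criteria.

```python
# Sketch of two-phase scheduling: phase 1 filters to feasible nodes,
# phase 2 scores the survivors and picks the best. Illustrative only.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_cpu: float   # cores
    free_mem: float   # GiB

@dataclass
class Task:
    cpu: float
    mem: float

def schedule(task, nodes):
    # Phase 1: feasibility — keep only nodes with enough free resources.
    feasible = [n for n in nodes
                if n.free_cpu >= task.cpu and n.free_mem >= task.mem]
    if not feasible:
        return None  # task stays pending
    # Phase 2: scoring — here, prefer the node left with the most headroom
    # (a "worst fit" policy to keep load spread out).
    return max(feasible,
               key=lambda n: (n.free_cpu - task.cpu) + (n.free_mem - task.mem))

nodes = [Node("a", 2, 4), Node("b", 8, 16)]
print(schedule(Task(cpu=4, mem=8), nodes).name)  # only "b" is feasible
```

The split matters for speed: the cheap feasibility pass shrinks the set of candidates before the (potentially expensive) scoring pass runs.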
Apache Mesos: an open-source cluster manager providing resource isolation and sharing across distributed resources.
-Mesos began as a research project in the UC Berkeley RAD Lab by then-PhD student Benjamin Hindman*
-Mesos has been adopted by Twitter, eBay, Airbnb, Apple, and at least 50 other organizations.
-“Mesos is a distributed systems kernel that stitches together a lot of different machines into a logical computer. It was born for a world where you own a lot of physical resources to create a big static computing cluster.”
Kubernetes by Google: an open-source platform for automating deployment, scaling, and operation of application containers across clusters of hosts.
-Kubernetes is based on Google's Borg and "The Datacenter as a Computer" papers.
-Kubernetes partners include Microsoft, Red Hat, VMware, IBM, HP, Docker, CoreOS, Mesosphere, and OpenStack*.
-"Kubernetes is an open source project that brings 'Google style' cluster management capabilities to data centers."
-"Kubernetes' goal is to become the standard way to interact with computing clusters. The idea is to reproduce the patterns that are needed to build cluster applications, based on experience at Google."
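To make that declarative, "Google style" model concrete, here is a minimal sketch of a Kubernetes Deployment manifest (the name, image, and resource numbers are all illustrative): you declare three replicas with resource requests, and the cluster's scheduler and controllers converge on that state.

```yaml
# Illustrative Deployment: declare desired state; Kubernetes reconciles it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # desired count; the controller maintains it
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        resources:
          requests:      # what the scheduler bin-packs against
            cpu: "250m"
            memory: "128Mi"
```

If a node dies and takes a replica with it, the controller notices the actual state no longer matches the declared state and starts a replacement elsewhere.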
Note: From a scheduling and orchestration level, these are not global (multi-zone) schedulers!
-Most data center scheduling is based on bin packing optimization of CPU, memory, and network bandwidth resources, where resources are assumed to be uniform (by value).
-“Kubernetes cluster is not intended to span multiple availability zones. Instead, we recommend building a higher-level layer to replicate complete deployments of highly available applications across multiple zones”
-Application-centric schedulers like Fenzo* (for Mesos) are designed to manage aspects of ephemerality that are unique to the cloud, such as reactive stream-processing systems for real-time operational insights and managed deployments of container-based applications.
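The bin-packing point above can be sketched with a toy first-fit-decreasing placer over (cpu, mem, bandwidth) demand vectors. All names and numbers are illustrative, and real schedulers add many constraints (affinity, priorities, preemption) on top of this core idea.

```python
# First-fit-decreasing bin packing over uniform (cpu, mem, bw) resources.
# Illustrative sketch only; real data-center schedulers are far richer.

def first_fit_decreasing(tasks, capacity, n_nodes):
    """tasks: list of (cpu, mem, bw) demands; capacity: per-node (cpu, mem, bw)."""
    free = [list(capacity) for _ in range(n_nodes)]
    placement = {}
    # Place largest tasks first (by total demand) to reduce fragmentation.
    for i, t in sorted(enumerate(tasks), key=lambda it: -sum(it[1])):
        for node, avail in enumerate(free):
            if all(d <= a for d, a in zip(t, avail)):
                for dim, d in enumerate(t):
                    avail[dim] -= d
                placement[i] = node
                break
        else:
            placement[i] = None  # unschedulable with current capacity
    return placement

tasks = [(2, 4, 1), (6, 8, 2), (3, 2, 1)]
print(first_fit_decreasing(tasks, capacity=(8, 16, 4), n_nodes=2))
```

Note that each placement decision is greedy and local, which is exactly why these schedulers do not attempt global, multi-zone optimality.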
My server farm is for R&D purposes, so I have a small number of physical machines, have a certain amount of legacy infrastructure (which makes me no different than most IT shops), and I have to be prepared to mimic a diverse set of infrastructures. Some of my most powerful equipment, in fact, is only powered up if there's a paying customer who needs it - the noise and power requirements don't justify keeping that stuff online unless it's actually necessary.
I have 2 Docker hosts at the moment, one running CentOS 6 (my major workload) and one running CentOS 7 (because some Docker image builds done under CentOS 6 can crash Docker). These are older machines, so the Docker containers are in discrete VMs, not in OpenStack instances. To run OpenStack, a machine should ideally have at least 16 GB of RAM, and I'm not sure that one of my Docker hosts can physically go that high.
So my production Docker instances run without the benefit of OpenStack.
On the other hand, a great deal of effort has been exerted in the area of Container-in-Cloud support. Amazon's Elastic Beanstalk and EC2 features, for example. There are similar works in progress for OpenStack. And, if memory serves, Vagrant has a plugin especially designed to construct and spin up Docker containers in OpenStack.
Cloud instances provide the flexibility of being able to spin up and transport entire VM images, and do so with a minimum of redundant resources. Docker instances share these virtues as well, so it's no surprise that people have been putting the two together.
Some people, when well-known sources tell them that fire will burn them, don't put their hands in the fire.
Some people, being skeptical, will put their hands in the fire, get burned, and learn not to put their hands in the fire.
And some people, believing that they know better than well-known sources, will claim it's a lie, put their hands in the fire, and continue to scream it's a lie even as their hands burn down to charred stumps.