If you are using Amazon Web Services, consider using Elastic Path CloudOps for AWS to automate your infrastructure setup and deployment, instead of manually configuring your infrastructure and deploying using the Pusher scripts.
You should be familiar with the concepts and deployment process used by the Pusher deployment scripts. These concepts apply to all deployment scenarios, regardless of whether the Pusher is used.
These considerations apply to all deployments.
- With the exception of Cortex Studio, each Web App should be deployed to a separate application server instance. This simplifies the management, tuning, and monitoring of the Web Apps.
- The Cortex Studio Web App should be deployed to the same application server as the Cortex Server to avoid CORS (cross-origin resource sharing) issues. Cortex Studio is normally only deployed to test environments, as it is not required in production environments.
- The deployment package contains all artifacts required to deploy Elastic Path Commerce. See Deployment package structure.
Clustering and Scaling
- The Cortex Server is sessionless and can be scaled horizontally behind a load balancer for availability and throughput.
- Auto-scaling may be used to handle peak volumes, subject to your Elastic Path license terms.
- The Commerce Manager Server can be scaled horizontally behind a load balancer with sticky sessions.
- Load testing has been performed with 50 concurrent users per server. Exceeding this number on a single server instance may degrade performance.
- The default session timeout is 4 hours so that users do not lose unsaved changes due to unplanned interruptions. The longer session timeout is not expected to cause performance issues because there is a predictable, finite number of admin users.
- The Integration Server is also sessionless and can be scaled horizontally for availability.
- Normally two instances are sufficient unless there are very high order volumes or extensive back-end integration processing.
- A load balancer is needed only when exposing web services through the Integration Server.
- An alternative is to run one instance and automatically restart it on failure.
- The Batch Server is designed to be deployed as a single instance and should not be scaled horizontally.
- The Search Server uses a Solr Master/Slave architecture. See Search Server Clustering for configuration and management details.
- The Search Master can be deployed to the same VM as the Batch Server for small installations, or to a separate VM to handle larger catalogs.
- Only one Search Master instance can be running at a time. Search Master failure does not impact application availability.
- For Search Slaves, the simplest approach is to deploy a Slave instance to each node that requires search functionality. This works well for smaller installations, but may not scale for a large number of nodes or for very large catalogs. The overhead of replicating search indexes to multiple slaves needs to be weighed against deployment complexity.
- An alternative is to deploy a pool of search slaves behind a sessionless load balancer.
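The contrasting load-balancing requirements above (sessionless round-robin for the Cortex Server, sticky sessions for the Commerce Manager Server) can be sketched with a reverse proxy such as nginx. This is a minimal illustration only; the hostnames, ports, and pool sizes are placeholders, and `ip_hash` is just one of several sticky-session strategies nginx supports.

```nginx
# Cortex Server is sessionless: default round-robin distribution
# across instances is sufficient.
upstream cortex_pool {
    server cortex-1.internal:8080;   # placeholder hostnames/ports
    server cortex-2.internal:8080;
}

# Commerce Manager Server requires sticky sessions: ip_hash pins
# each client to the same back-end instance.
upstream cm_pool {
    ip_hash;
    server cm-1.internal:8080;
    server cm-2.internal:8080;
}

server {
    listen 80;                       # Cortex front end
    location / {
        proxy_pass http://cortex_pool;
    }
}

server {
    listen 81;                       # Commerce Manager front end
    location / {
        proxy_pass http://cm_pool;
    }
}
```

The same sessionless pattern as the Cortex pool applies to a pool of Search Slaves behind a load balancer.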
Configuration
This section describes the major configuration points for deployment. See System Configuration for additional details.
- The database connection and JMS broker are defined using JNDI. For examples, see the pusher-package/the-pusher/templates/tomcat-<database>-context.xml files in the devops project.
- The Commerce Database JDBC data source is defined by the jdbc/epjndi JNDI resource.
- The JMS broker connection is defined by the jms/JMSConnectionFactory JNDI resource.
- Application ports are defined using property files. See Configuring Environment Specific Settings.
- Cortex is also configured using property files. See Cortex Configuration Files.
- Additional application configuration is contained in the database. See Configuring System Settings.
- The default application context paths are:
- The JDBC driver is packaged in the database/jdbc directory.
- ActiveMQ JARs are packaged in the tools/activemq/lib directory.
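As a rough illustration of the two JNDI resources named above, a Tomcat context.xml might define them as follows. This is a sketch, not a copy of the shipped templates: the driver class, connection URLs, credentials, and pool size are placeholder assumptions, and the actual values should be taken from the tomcat-<database>-context.xml templates in the devops project.

```xml
<!-- Hypothetical Tomcat context.xml fragment; values are placeholders. -->
<Context>

  <!-- Commerce Database JDBC data source (jdbc/epjndi) -->
  <Resource name="jdbc/epjndi"
            auth="Container"
            type="javax.sql.DataSource"
            driverClassName="com.mysql.cj.jdbc.Driver"
            url="jdbc:mysql://db-host:3306/COMMERCEDB"
            username="ep_user"
            password="CHANGE_ME"
            maxTotal="50"/>

  <!-- JMS broker connection (jms/JMSConnectionFactory) -->
  <Resource name="jms/JMSConnectionFactory"
            auth="Container"
            type="org.apache.activemq.ActiveMQConnectionFactory"
            factory="org.apache.activemq.jndi.JNDIReferenceFactory"
            brokerURL="tcp://jms-host:61616"/>

</Context>
```

Defining both resources through JNDI keeps environment-specific connection details out of the application artifacts, so the same deployment package can be promoted across environments unchanged.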