
Conversation

@stgrosshh

In a containerized environment we cannot use server.port when more than one
instance of a service is running on the same host. This introduces a first
container-related property to build an instance id from the mapped host port,
to be registered in discovery.
@spencergibb
Member

Help me understand your use case. What discovery server are you using (eureka, consul, zookeeper)? You can already customize the instance ids in each technology.

For example, in a Eureka app, in application.properties (or bootstrap.properties):

eureka.instance.instanceId=${CONTAINER_HOST_PORT:${spring.application.name}:${server.port}}

This would set the instanceId to CONTAINER_HOST_PORT if it is set, otherwise to spring.application.name:server.port for local development.
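For a containerized setup, a minimal sketch might look like the following, assuming the container runtime injects the mapped host address and port as environment variables (HOST_IP and HOST_PORT are hypothetical names here; use whatever your orchestrator provides):

eureka.instance.instanceId=${spring.application.name}:${HOST_IP:localhost}:${HOST_PORT:${server.port}}
eureka.instance.nonSecurePort=${HOST_PORT:${server.port}}
eureka.instance.hostname=${HOST_IP:localhost}

The fallbacks after the colons keep the same properties working on a laptop without the environment variables set.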

@stgrosshh
Author

Yeah, that's an alternative approach, of course.
I thought it might be a more convenient default, though.
I was a little puzzled at first that setting eureka.instance.nonSecurePort was
not reflected in the instance id. So I dug into the code a bit and arrived at the suggested approach.

At least some clarification regarding this in the documentation would be great.

@spencergibb
Member

IdUtils was introduced to provide a default that might be useful on a laptop. How to change the instance id is documented at http://cloud.spring.io/spring-cloud-static/Brixton.SR5/#_changing_the_eureka_instance_id.

The simplest way to manually make the instance id unique would be to set spring.application.instance_id.
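As a minimal sketch, assuming a Brixton-era Spring Cloud Netflix client where the default Eureka instance id picks up spring.application.instance_id when it is present, the following in application.properties would make the id unique per instance:

spring.application.instance_id=${random.value}

Any externally supplied value (for example an environment variable holding the mapped host port) could be used in place of ${random.value}.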

I feel like this is more complex than the already allowed configuration.

@spencergibb closed this on Sep 7, 2016