Fix several typos

This commit is contained in:
Marc Jansen 2021-04-13 15:43:10 +02:00
parent a93a11cc80
commit d7db8b0481
35 changed files with 63 additions and 63 deletions

View File

@ -1,6 +1,6 @@
# Cloud Native GeoServer Changelog
## Relase 0.2.1, February 25, 2021
## Release 0.2.1, February 25, 2021
`v0.2.1` is a patch release adding support to configure CPU and Memory resource limits in the docker composition.
@ -16,7 +16,7 @@ given the percentage set and the memory limit imposed to the container.
A log entry is printed when each service is ready to verify the memory and cpu limits are seen by the JVM.
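A fragment like the following (illustrative service name and values only, not the project's actual composition) shows how such limits are typically declared in a docker composition, with the JVM deriving its heap from the container limit:

```yaml
services:
  wms:
    image: geoservercloud/gs-cloud-wms
    environment:
      # let the JVM size its heap as a percentage of the container memory limit
      JAVA_OPTS: "-XX:MaxRAMPercentage=80"
    deploy:
      resources:
        limits:
          cpus: "2.0"
          memory: 1G
```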
## Relase 0.2.0, February 11, 2021
## Release 0.2.0, February 11, 2021
`v0.2.0` code freeze by Dec 31, 2020. Internal priority shifts delayed the release until now, though the system has been deployed in production using Kubernetes since 0.1.0 by end of August 2020.

View File

@ -59,7 +59,7 @@ The following configuration properties apply to all *GeoServer* microservices (i
```
geoserver.security.enabled=true #flag to turn off geoserver security auto-configuration
geoserver.proxy-urls.enabled=true #flag to turn off proxyfing respose URL's based on gateway's provided HTTP request headers (X-Forwarded-*)
geoserver.proxy-urls.enabled=true #flag to turn off proxying response URLs based on gateway's provided HTTP request headers (X-Forwarded-*)
geoserver.web.resource-browser.enabled=true
geoserver.servlet.enabled=true #flag to turn off auto-configuration of geoserver servlet context
geoserver.servlet.filter.session-debug.enabled=true #flag to disable the session debug servlet filter

View File

@ -202,7 +202,7 @@ public class CatalogClientResourceStore implements ResourceStore {
boolean localIsDirectory = Type.DIRECTORY.equals(local.getType());
if (localIsDirectory && !local.delete()) {
throw new IllegalStateException(
"Unable to delte local copy of directory " + resource.path());
"Unable to delete local copy of directory " + resource.path());
}
File localDirectory = local.dir();
boolean localAndRemoteUpToDate = resource.lastmodified() == local.lastmodified();

View File

@ -4,7 +4,7 @@ Sets up the basis for an event-stream based synchronization of configuration cha
This module uses [spring-cloud-bus](https://cloud.spring.io/spring-cloud-static/spring-cloud-bus/3.0.0.M1/reference/html/) to notify all services in the cluster of catalog and geoserver configuration changes.
The remote event notification mechanism is agnostic of the transport layer, which depends on the configured [spring-cloud-streams](https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/3.0.6.RELEASE/reference/html/) Mesage Broker, which is in principle any supported [AMQP](https://www.amqp.org/) (Advanced Message Queuing Protocol) provider, Apache Kafka, Kafka Streams, Google PubSub, Azure Event Hubs, and more, depending on the `spring-cloud-streams` release and third-party binder implementations;
The remote event notification mechanism is agnostic of the transport layer, which depends on the configured [spring-cloud-streams](https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/3.0.6.RELEASE/reference/html/) Message Broker, which is in principle any supported [AMQP](https://www.amqp.org/) (Advanced Message Queuing Protocol) provider, Apache Kafka, Kafka Streams, Google PubSub, Azure Event Hubs, and more, depending on the `spring-cloud-streams` release and third-party binder implementations;
We're using [RabbitMQ](https://www.rabbitmq.com/) as the message broker in the default configuration, by means of including the `gs-cloud-catalog-event-bus-amqp` dependency in all services, which this particular module does not depend on.
@ -42,7 +42,7 @@ geoserver:
send-diff: true
```
Whether they're needed depends on the geoserver backend type and configuration, as explained bellow.
Whether they're needed depends on the geoserver backend type and configuration, as explained below.
## Usage
@ -57,9 +57,9 @@ May the configuration backend type and topology require receiving either the ful
There're basically two major backend topologies, regardless of the actual implementation (e.g. "data directory", "jdbcconfig", "catalog-service", etc):
- shared catalog: The catalog/config stores are shared among all geoserver services, hence typically the event payload would not be required;
- catalog-per-service: each service instance holds on its own copy of the catalog and config objects, hence typicall the remote configuration events should carry the object and/or diff payloads to keep the local version of the catalog and config in sync.
- catalog-per-service: each service instance holds on its own copy of the catalog and config objects, hence typically the remote configuration events should carry the object and/or diff payloads to keep the local version of the catalog and config in sync.
Note, however, the catalog-per-service approach would not be achievable in an elastically escalable deployment, where service instances of the same kind may sacle out and down to satisfy demand, or for high availability, hence the preferred default configuration is the "shared catalog" with an appropriate, scalabe, catalog backend.
Note, however, the catalog-per-service approach would not be achievable in an elastically scalable deployment, where service instances of the same kind may scale out and down to satisfy demand, or for high availability, hence the preferred default configuration is the "shared catalog" with an appropriate, scalable, catalog backend.
## Technical details
@ -67,7 +67,7 @@ Note, however, the catalog-per-service approach would not be achievable in an el
In order to favor loose coupling between components, the "event listener" approach to `Catalog` and `GeoServer` configuration is replaced by a Spring's `ApplicationEventPublisher` based one, by means of registering a single event listener on the `Catalog` and `GeoServer` application context instances, which re-publishes them to the `ApplicationEventPublisher` (generally the `ApplicationContext` itself).
This means the event subsystem relies on regular `ApplicationContext` event publishing and subscribing (e.g. trhough `@EventListener(EventType.class)` mehtod annotations), for both "local" and "remote" events.
This means the event subsystem relies on regular `ApplicationContext` event publishing and subscribing (e.g. through `@EventListener(EventType.class)` method annotations), for both "local" and "remote" events.
The "local" events published to the `ApplicationContext` are adapted to the ones in the `org.geoserver.cloud.event.catalog` and `org.geoserver.cloud.event.config` packages, which provide a nicer and homogeneous interface. For instance, `GeoServer` config listeners (`org.geoserver.config.ConfigurationListener`) do not receive "event" objects, but finer grained method calls, like
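The event-bridging pattern described above can be sketched without Spring at all: a toy publisher that dispatches by event type, standing in for `ApplicationEventPublisher` plus `@EventListener` registration (illustrative only, not the actual GeoServer Cloud classes):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal plain-Java sketch: a single bridge re-publishes events to typed
// subscribers, the way @EventListener(EventType.class) methods subscribe to
// events re-published through the ApplicationEventPublisher.
class EventBridge {
    private final List<Consumer<Object>> subscribers = new ArrayList<>();

    // analogous to an @EventListener(EventType.class) annotated method
    <E> void subscribe(Class<E> type, Consumer<E> handler) {
        subscribers.add(event -> {
            if (type.isInstance(event)) handler.accept(type.cast(event));
        });
    }

    // analogous to ApplicationEventPublisher.publishEvent(event)
    void publish(Object event) {
        subscribers.forEach(s -> s.accept(event));
    }
}
```

Subscribers registered for one event type simply never see events of another type, which is what makes the single-listener bridge viable for both "local" and "remote" events.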

View File

@ -185,7 +185,7 @@ public class AbstractRemoteApplicationEventsTest {
} else if (info instanceof LoggingInfo) {
type = (Class<T>) LoggingInfo.class;
} else {
throw new IllegalArgumentException("uknown Info type: " + info);
throw new IllegalArgumentException("Unknown Info type: " + info);
}
return type;
}

View File

@ -364,7 +364,7 @@ public class LocalApplicationEventsAutoConfigurationTest {
} else if (info instanceof LoggingInfo) {
type = (Class<T>) LoggingInfo.class;
} else {
throw new IllegalArgumentException("uknown Info type: " + info);
throw new IllegalArgumentException("Unknown Info type: " + info);
}
// apply the changes to a new proxy to build the expected PropertyDiff
T proxy = ModificationProxy.create(ModificationProxy.unwrap(info), type);

View File

@ -1,3 +1,3 @@
# GeoServer Jackson bindings
Jackson-databind bindings for GeoSever Catalog and Configuration objects.
Jackson-databind bindings for GeoServer Catalog and Configuration objects.

View File

@ -80,7 +80,7 @@ public class GeometryDeserializer<T extends Geometry> extends JsonDeserializer<T
return readGeometryCollection(geometryNode, dimensions, hasM);
default:
throw new IllegalArgumentException(
"Uknown geometry node: " + geometryNode.toString());
"Unknown geometry node: " + geometryNode.toString());
}
}

View File

@ -22,7 +22,7 @@ The only two alternatives I know of are the [jdbcconfig community module](https:
Now, `CatalogFacade` shouldn't be the place where that happens, but purely as an implementation detail of `CatalogImpl`, which should give `CatalogFacade` both the object to save, and the delta properties, so that `CatalogFacade`'s implementation can apply the changes how it sees fit in order to guarantee the operation's atomicity.
The current situation is that `CatalogImpl` *relies* on `CatalogFacade` always returning a `ModificationProxy`, which further complicates realizing the initial design goal (I think) of `CatalogFacade` being a pure Data Access Object, so that implementations could be interchangeable. Instead, the lack of single responsibility among these two classes forces all alternate `CatalogFacade` implementors to replicate the logic of `DefaultCatalogFacade`.
* `ResolvingProxy` responsibility leak: a `CatalogFacade` shouldn't kwow it can get a proxy, yet on every `add(*Info)` method it tries to "resolve" object references to actual object on each overloaded version of `DefaultCatalogFacade.resolve(CatalogInfo)`, which in turn call `ResolvingProxy.resolve(getCatalog(), ...)` and `ModificationProxy.unwrap(...)` for each posible proxied reference. Now, `ResolvingProxy` usage is only an implementation detail of the default persitence mechanism, used by `XstreamPersister` at deserialization time, and `GeoServerResourcePersister` catalog listener to save a modified obejct to its xml file on disk. The only other direct user is restconfig's `LayerGroupController`, which correctly resolves all layergroup's referred layers upon a POST request, which could be an indication the restconfig API relies on the above undocumented behavior for other kinds of objects, or that it is ok to rely correct resolution of object references by its use of `XstreamPersister`, haven't checked in dept.
* `ResolvingProxy` responsibility leak: a `CatalogFacade` shouldn't know it can get a proxy, yet on every `add(*Info)` method it tries to "resolve" object references to actual objects on each overloaded version of `DefaultCatalogFacade.resolve(CatalogInfo)`, which in turn calls `ResolvingProxy.resolve(getCatalog(), ...)` and `ModificationProxy.unwrap(...)` for each possible proxied reference. Now, `ResolvingProxy` usage is only an implementation detail of the default persistence mechanism, used by `XstreamPersister` at deserialization time, and the `GeoServerResourcePersister` catalog listener to save a modified object to its xml file on disk. The only other direct user is restconfig's `LayerGroupController`, which correctly resolves all layergroup's referred layers upon a POST request, which could be an indication that the restconfig API relies on the above undocumented behavior for other kinds of objects, or that it is ok to rely on correct resolution of object references by its use of `XstreamPersister`; I haven't checked in depth.
* `Catalog.detach(...)`: Used to remove all proxies from the argument object, including any reference to another catalog object, and return the raw, un-proxied object.
* Breaks Catalog's information hiding, by design. It's an API method, shouldn't expose directly or indirectly implementation details. It's telling the returned objects are proxied AND live, and providing a workaround around information hiding to create unadvertised side effects if the returned live object is modified.
@ -32,9 +32,9 @@ The current situation is that `CatalogImpl` *relies* on `CatalogFacade` always r
* Event handling responsibility leak: With `ModificationProxy`'s responsibility leak to the DAO, comes event handling responsibility leak, which is split between the `Catalog` and the `CatalogFacade`. All the events for `add()` and `remove()` are fired by the catalog, whilst event propagation for `save()` is delegated to the facade.
* Id handling contract: Since the `CatalogInfo` object identifiers are busines ids, and not auto-generated, the contract should be clear in that `CatalogFacade.add()` expects the id to be set, and fail if an object with that id already exists, while `Catalog.add()`'s contract should be clear in that it can get either a pre-assigned id, but will create and assign one if that's not the case.
* Id handling contract: Since the `CatalogInfo` object identifiers are business ids, and not auto-generated, the contract should be clear in that `CatalogFacade.add()` expects the id to be set and fails if an object with that id already exists, while `Catalog.add()`'s contract should be clear in that it can accept a pre-assigned id, but will create and assign one if that's not the case.
* Business logic leak: `LayerInfo`'s `name` property is linked to its referred `ResourceInfo`'s name. This is enforced by `CatalogFacade`'s `save(LayerInfo)`, which indirectly, through `LayerInfoLookup` specialization of `CatalogInfoLookup`, takes the resource name instead of the layer's name to update its internal name-to-object hash map; and by `save(ResourceInfo)`, which saves both the argument obejct itself, and the linked `LayerInfo` by means of the specialized method `LayerInfoLookup.save(ResourceInfo)`. Instead of buried in the object model implementation (with `LayerInfoImpl.getName()` and `setName()` deferring to its internal `resource`, it should be handled as a business rule inside the `Catalog.save(LayerInfo)` and `Catalog.save(ResourceInfo)` methods, and not leak down to the DAO. A similar thing happens with `LayerInfo`'s `title` property, it defers to its resource's `title` property. Now, when saving a layer, both its resource's name and tile get effectively updated as a side effect of the `ModificationProxy.commit()`, but IMO it should be explicitly handled as a lot other business rules by `CatalogImpl`, so that `CatalogFacade` implementors can work under a cleaner contract and not having to deal with ModificationProxy at all, as mentioned above. This also breaks event handling. Given when the layer's name or title is updated, what's actually updated is the `ResourceInfo`, it would be expected that pre and post modify events would be triggered also for the resource object, and not just for the layer object, which is yet another reason to handle this artificial link explicitly as a business rule in Catalog.
* Business logic leak: `LayerInfo`'s `name` property is linked to its referred `ResourceInfo`'s name. This is enforced by `CatalogFacade`'s `save(LayerInfo)`, which indirectly, through the `LayerInfoLookup` specialization of `CatalogInfoLookup`, takes the resource name instead of the layer's name to update its internal name-to-object hash map; and by `save(ResourceInfo)`, which saves both the argument object itself, and the linked `LayerInfo` by means of the specialized method `LayerInfoLookup.save(ResourceInfo)`. Instead of being buried in the object model implementation (with `LayerInfoImpl.getName()` and `setName()` deferring to its internal `resource`), it should be handled as a business rule inside the `Catalog.save(LayerInfo)` and `Catalog.save(ResourceInfo)` methods, and not leak down to the DAO. A similar thing happens with `LayerInfo`'s `title` property: it defers to its resource's `title` property. Now, when saving a layer, both its resource's name and title get effectively updated as a side effect of the `ModificationProxy.commit()`, but IMO it should be explicitly handled, like a lot of other business rules, by `CatalogImpl`, so that `CatalogFacade` implementors can work under a cleaner contract and not have to deal with ModificationProxy at all, as mentioned above. This also breaks event handling. Given that when the layer's name or title is updated, what's actually updated is the `ResourceInfo`, it would be expected that pre and post modify events would be triggered also for the resource object, and not just for the layer object, which is yet another reason to handle this artificial link explicitly as a business rule in Catalog.
* Unnecessary special cases: `LayerInfoLookup` wouldn't be necessary if not due to the above mentioned responsibility issue. `MapInfo`s internal storage is a `List` instead of a `LayerInfoLookup`. Given everything related to `MapInfo` is plain dead code, it should either be removed or `MapInfo` related methods throw an `UnsupportedOperationException`
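The contract argued for above — the catalog computing the delta and handing the facade both the object id and the changes to apply — can be sketched in plain Java. All names here are hypothetical stand-ins, not GeoServer API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// "Patch": the delta properties the Catalog would compute, so the facade
// can apply them atomically and never needs to know about ModificationProxy.
class Patch {
    final Map<String, Object> properties = new LinkedHashMap<>();
    Patch with(String name, Object value) { properties.put(name, value); return this; }
}

// Stand-in for a CatalogInfo object with a business id.
class WorkspaceRecord {
    String id, name;
    WorkspaceRecord(String id, String name) { this.id = id; this.name = name; }
}

// A facade as a pure DAO: add() fails on an existing id, update() applies a delta.
class PlainFacade {
    private final Map<String, WorkspaceRecord> store = new LinkedHashMap<>();

    void add(WorkspaceRecord ws) {
        if (store.putIfAbsent(ws.id, ws) != null)
            throw new IllegalArgumentException("id already exists: " + ws.id);
    }

    WorkspaceRecord update(String id, Patch patch) {
        WorkspaceRecord ws = store.get(id);
        // apply only the delta; a real implementation would do this generically
        patch.properties.forEach((p, v) -> { if (p.equals("name")) ws.name = (String) v; });
        return ws;
    }
}
```

With such a contract the facade can guarantee atomicity however it sees fit (in-memory swap, SQL UPDATE, etc.) without ever unwrapping proxies.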
@ -48,7 +48,7 @@ The current situation is that `CatalogImpl` *relies* on `CatalogFacade` always r
* `CatalogImpl` decorates its default catalog facade on its default constructor, by calling `setFacade(new IsolatedCatalogFacade(DefaultCatalogFacade(this)))`, but it should be `setFacade()` the one that decorates it. That's because `Catalog.setFacade()` is an API method, and the way to override the default in-memory storage by an alternate implementation, but this breaks the support for isolated workspaces in that case.
On the other hand, I can't see why the isolated workspace handling needs to be implemented as a `CatalogFacade` decorator, which is intended to be the DAO, and not as a `Catalog` decorator, which is the business object, just as so many other catalog decorators (`SecureCatalogImpl`, `LocalWorkspaceCatalog`, `AdvertisedCatalog`). Moreover, since we're at it, there's `AbstractFilteredCatalog` already, which the catalog decorator for isolated workspaces could inherit from, and at the same time `AbstractFilteredCatalog` could inherit from `AbstractCatalogDecorator` to avoid code duplication on the methods it doesn't need to override.
* `<T> IsolatedCatalogFacade.filterIsolated(CloseableIterator<T> objects, Function<T, T> filter)` breaks the streaming nature of the calling method by creating an `ArrayList<T>` and populating it. It shoud decorate the argument iterator to apply the filtering in-place:
* `<T> IsolatedCatalogFacade.filterIsolated(CloseableIterator<T> objects, Function<T, T> filter)` breaks the streaming nature of the calling method by creating an `ArrayList<T>` and populating it. It should decorate the argument iterator to apply the filtering in-place:
```java
List<T> iterable = new ArrayList<>();

View File

@ -29,7 +29,7 @@ import org.springframework.lang.Nullable;
* provides alternative, semantically clear query methods. For example, {@link
* StyleRepository#findAllByNullWorkspace()} and {@link
* StyleRepository#findAllByWorkspace(WorkspaceInfo)} are self-explanatory, no magic is involved nor
* decision making, which are handled at a higer level of abstraction ({@code CatalogFacade} and/or
* decision making, which are handled at a higher level of abstraction ({@code CatalogFacade} and/or
* {@code Catalog}).
*
* <p>All query methods that could return zero or one result, return {@link Optional}. All query
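The query-method style this javadoc describes can be illustrated with a stdlib-only sketch; `Style` and `StyleRepo` below are stand-ins, not the actual repository types:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

// Stand-in for StyleInfo: a null workspace means a global style.
class Style {
    final String name, workspace;
    Style(String name, String workspace) { this.name = name; this.workspace = workspace; }
}

// Explicit, self-explanatory query methods instead of nullable "magic"
// arguments; zero-or-one results come back as Optional, never null.
class StyleRepo {
    private final List<Style> styles = new ArrayList<>();
    void add(Style s) { styles.add(s); }

    List<Style> findAllByNullWorkspace() {
        return styles.stream().filter(s -> s.workspace == null).collect(Collectors.toList());
    }
    List<Style> findAllByWorkspace(String ws) {
        return styles.stream().filter(s -> ws.equals(s.workspace)).collect(Collectors.toList());
    }
    Optional<Style> findByName(String name) {
        return styles.stream().filter(s -> s.name.equals(name)).findFirst();
    }
}
```

Decision making (e.g. what a "no workspace" argument means) stays at the `CatalogFacade`/`Catalog` level; the repository only answers unambiguous questions.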

View File

@ -116,7 +116,7 @@ public class CatalogInfoTypeRegistry<I extends CatalogInfo, R> {
}
if (cm == null)
throw new IllegalArgumentException(
"Unable to determine CatalogInfo subtype from objec " + object);
"Unable to determine CatalogInfo subtype from object " + object);
return (Class<T>) cm.getInterface();
}

View File

@ -46,8 +46,8 @@ public class DefaultNamespaceInfoRules implements CatalogInfoBusinessRules<Names
}
/**
* Selects a new catalog default namespace if as the result of removing the namespace refered to
* by {@code context.getObject()}, the catalog has no default one.
* Selects a new catalog default namespace if as the result of removing the namespace referred
* to by {@code context.getObject()}, the catalog has no default one.
*/
public @Override void afterRemove(CatalogOpContext<NamespaceInfo> context) {
if (context.isSuccess()) {

View File

@ -37,8 +37,8 @@ public class DefaultWorkspaceInfoRules implements CatalogInfoBusinessRules<Works
}
/**
* Selects a new catalog default workspace if as the result of removing the workspace refered to
* by {@code context.getObject()}, the catalog has no default workspace.
* Selects a new catalog default workspace if as the result of removing the workspace referred
* to by {@code context.getObject()}, the catalog has no default workspace.
*/
public @Override void afterRemove(CatalogOpContext<WorkspaceInfo> context) {
if (context.isSuccess()) {
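As a plain-Java illustration of the rule documented above (the types are stand-ins for the actual `CatalogOpContext`/`WorkspaceInfo` machinery): if the removed workspace was the catalog default, promote any remaining workspace, or leave no default.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model: workspaces keyed by id, plus the id of the current default.
class WorkspaceRules {
    final Map<String, String> workspaces = new LinkedHashMap<>(); // id -> name
    String defaultWorkspace; // id of the default workspace, may be null

    // after a successful remove: if the default is gone, pick any survivor
    void afterRemove(String removedId) {
        workspaces.remove(removedId);
        if (removedId.equals(defaultWorkspace)) {
            defaultWorkspace = workspaces.keySet().stream().findFirst().orElse(null);
        }
    }
}
```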

View File

@ -122,7 +122,7 @@ public class DefaultPropertyValuesResolver {
}
/**
* We don't want the world to be able and call this without going trough {@link
* We don't want the world to be able to call this without going through {@link
* #resolve(ResourceInfo)}
*/
private void resolve(FeatureTypeInfo featureType) {

View File

@ -797,7 +797,7 @@ public class CapabilitiesFilterSplitterFix implements FilterVisitor, ExpressionV
}
public Object visit(PropertyName expression, Object notUsed) {
// JD: use an expression to get at the attribute type intead of accessing directly
// JD: use an expression to get at the attribute type instead of accessing directly
if (parent != null && expression.evaluate(parent) == null) {
throw new IllegalArgumentException(
"Property '"

View File

@ -59,7 +59,7 @@ public interface ConfigRepository {
/** The logging configuration. */
Optional<LoggingInfo> getLogging();
/** Saves logging configuration, replacing the current one comletely. */
/** Saves logging configuration, replacing the current one completely. */
void setLogging(LoggingInfo logging);
/** Adds a service to the configuration. */
@ -99,7 +99,7 @@ public interface ConfigRepository {
* @param id The id of the service.
* @param clazz The type of the service.
* @return The service with the specified id, or {@code Optional.empty()} if no such service
* coud be found.
* could be found.
*/
<T extends ServiceInfo> Optional<T> getServiceById(String id, Class<T> clazz);
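A minimal in-memory sketch of that contract (names are illustrative; the real `ConfigRepository` works on GeoServer's `ServiceInfo`): the id must match *and* the stored service must be of the requested type, otherwise `Optional.empty()` rather than `null` or a `ClassCastException`:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

class ServiceRegistry {
    interface ServiceInfo { String getId(); }

    private final Map<String, ServiceInfo> services = new LinkedHashMap<>();
    void add(ServiceInfo s) { services.put(s.getId(), s); }

    // empty if no service with that id exists, or if it's not of the given type
    <T extends ServiceInfo> Optional<T> getServiceById(String id, Class<T> clazz) {
        return Optional.ofNullable(services.get(id))
                .filter(clazz::isInstance)
                .map(clazz::cast);
    }
}
```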

View File

@ -40,7 +40,7 @@ import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
/**
* Defaul implementation of GeoServer global and service configuration manager.
* Default implementation of GeoServer global and service configuration manager.
*
* <p>Implementation details:
*

View File

@ -1653,7 +1653,7 @@ public abstract class CatalogConformanceTest {
try {
rawCatalog.add(s2);
fail("Shoudl have failed with existing global style with same name");
fail("Should have failed with existing global style with same name");
} catch (IllegalArgumentException expected) {
}

View File

@ -13,9 +13,9 @@ geoserver:
# whether to send distributed events (i.e. acts as master). All instances receive remote events.
send-events: true
# whether to send the object (CatalogInfo/config info) as payload with the event. Set to false,
# not all possible payload types are propertly tested, and full object payload is not needed.
# not all possible payload types are properly tested, and full object payload is not needed.
send-object: ${geoserver.backend.data-directory.enabled}
# whether to send a diff of changes as payload with the event. Set to false, not all possible payload types are propertly tested nor needed.
# whether to send a diff of changes as payload with the event. Set to false, not all possible payload types are properly tested nor needed.
send-diff: ${geoserver.backend.data-directory.enabled}
backend:
# configure catalog backends and decide which backend to use on this service.

View File

@ -1,7 +1,7 @@
geoserver:
backend.catalog-service.enabled: ${backend.catalog:true}
web-ui:
# Theese are all default values, here just for reference. You can omit them and add only the ones to disable or further configure
# These are all default values, here just for reference. You can omit them and add only the ones to disable or further configure
security.enabled: true
wfs.enabled: true
wms.enabled: true

View File

@ -1,4 +1,4 @@
theme: jekyll-theme-modernist
title: Cloud Native GeoServer
description: Dockerized GeoSever micro-services
description: Dockerized GeoServer micro-services
show_downloads: false

View File

@ -26,7 +26,7 @@ The following diagram depicts the System's general architecture:
> - lines connecting a group to another component: connector applies to all services of the outgoing end, to all components of the incoming end;
> - white rectangles, components that are platform/deployment choices. For example:
> - "Event bus" could be a cloud provider's native service (event queue), or a microservice implementing a distributed event broker;
> - "Catalog/Config backend" is the software compoent used to access the catalog and configuration. Might be a microservice itself, catalog/config provider for "data directory", database, or other kind of external service store, catalog/config backend implementations;
> - "Catalog/Config backend" is the software component used to access the catalog and configuration. Might be a microservice itself, catalog/config provider for "data directory", database, or other kind of external service store, catalog/config backend implementations;
> - "Catalog/Config storage" is the storage mechanism that backs the catalog/config software component. Might be a shared "data directory" or database, a "per instance" data directory or database, and so on, depending on the available catalog/config backend implementations, and how they're configured and provisioned;
> - "Geospatial data sources" is whatever method is used to access the actual data served up by the microservices.

View File

@ -17,7 +17,7 @@ fashion.
This is the logical service name.
Since we're using a ["discovery first bootstrap"](https://docs.spring.io/spring-cloud-config/docs/2.2.7.RELEASE/reference/html/#discovery-first-bootstrap)
approach to service orchestration, other services won't use this service name to locate the discovery service, but a fixed list of service addresses need to be provided. See the "Client Configuration" section bellow for more details.
approach to service orchestration, other services won't use this service name to locate the discovery service, but a fixed list of service addresses need to be provided. See the "Client Configuration" section below for more details.
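Assuming a Eureka-based setup, such a fixed address list on the client side typically looks like the following (host names and the config service id are placeholders for your deployment):

```yaml
eureka:
  client:
    service-url:
      defaultZone: http://discovery-1:8761/eureka,http://discovery-2:8761/eureka
spring:
  cloud:
    config:
      discovery:
        enabled: true
        service-id: config-service
```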
## Reference documentation

View File

@ -1,12 +1,12 @@
# Cloud Native GeoServer REST configuration API v1 service
Spring Boot/Cloud microservice that exposes GeoSever [REST API](https://docs.geoserver.org/stable/en/user/rest/).
Spring Boot/Cloud microservice that exposes GeoServer [REST API](https://docs.geoserver.org/stable/en/user/rest/).
**Docker image**: `geoservercloud/gs-cloud-restconfig-v1`.
**Service name**: `restconfig-v1`.
Logical service name by which the [gateway-service](gateway-service.yml) will get the actual instances addresses from the [discovery-service](discovery-service.yml) and perform client-side load balancing against when interacting with the service.
## Configuration

View File

@ -4,12 +4,12 @@
The *Cloud Native GeoServer* project splits the GeoServer geospatial services and API offerings into individually deployable components of a [microservices based architecture](https://microservices.io/). This project provides clear constructive changes to GeoServer when required, based on prior experience with GeoServer clustering solutions.
*Cloud Native GeoServer* is built with GeoServer software components. Compontents are adapted and/or extended in a [functional decomposition by business capability](https://microservices.io/patterns/decomposition/decompose-by-business-capability.html). The result is each service (OWS service, the Web UI, the REST API ) is available as a self-contained, individually deployable and scalable micro-services.
*Cloud Native GeoServer* is built with GeoServer software components. Components are adapted and/or extended in a [functional decomposition by business capability](https://microservices.io/patterns/decomposition/decompose-by-business-capability.html). The result is that each service (OWS service, the Web UI, the REST API) is available as a self-contained, individually deployable and scalable micro-service.
> Q: Does that mean *GeoServer*'s `.war` is deployed several times, with each instance exposing a given "business capability"?
>
> Absolutely not, this is not a clusterd GeoSever approach and does not use war files.
>Each microservice is its own self-contained application, including only the GeoServer compoents needed for the service. Many GeoServer components provide a lot of functionality (such as different output formats). In these cases, care is taken to only load the functionality that is needed for a light-weight experience.
> Absolutely not, this is not a clustered GeoServer approach and does not use war files.
> Each microservice is its own self-contained application, including only the GeoServer components needed for the service. Many GeoServer components provide a lot of functionality (such as different output formats). In these cases, care is taken to only load the functionality that is needed for a light-weight experience.
# Contents
{:.no_toc}
@ -43,7 +43,7 @@ These containerized applications allow deployment strategies to vary from single
# Goals and benefits
* Posibility to assess and provide guidelines for proper dimensioning of services based on each one's resource needs and performance characteristics
* Possibility to assess and provide guidelines for proper dimensioning of services based on each one's resource needs and performance characteristics
* Independent evolvability of services
* Externalized, centralized configuration of services and their sub-components
* Service isolation allows the system to keep working in the event of some specific service becoming unavailable
@ -152,7 +152,7 @@ The following diagram depicts the System's general architecture:
> - lines connecting a group to another component: connector applies to all services of the outgoing end, to all components of the incoming end;
> - white rectangles, components that are platform/deployment choices. For example:
> - "Event bus" could be a cloud infrastructure provider's native service (event queue), or a microservice implementing a distributed event broker;
> - "Catalog/Config backend" is the software compoent used to access the catalog and configuration. Might be a microservice itself, catalog/config provider for "data directory", database, or other kind of external service store, catalog/config backend implementations;
> - "Catalog/Config backend" is the software component used to access the catalog and configuration. Might be a microservice itself, catalog/config provider for "data directory", database, or other kind of external service store, catalog/config backend implementations;
> - "Catalog/Config storage" is the storage mechanism that backs the catalog/config software component. Might be a shared "data directory" or database, a "per instance" data directory or database, and so on, depending on the available catalog/config backend implementations, and how they're configured and provisioned;
> - "Geospatial data sources" is whatever method is used to access the actual data served up by the microservices.
@ -169,7 +169,7 @@ Project is being deployed in production since `v0.1.0`.
[Camptocamp](https://camptocamp.com/), as the original author, is in the process of donating the project to OSGeo/GeoServer.
> Q: So, is this **production ready**?
> Not at all. *Cloud Native GeoSever* is being used in production for the functionalities needed by Camptocamp's customer funding this project so far. That does not mean it's ready for general availability. For instance, these are the most oustanding issues:
> Not at all. *Cloud Native GeoServer* is being used in production for the functionalities needed by Camptocamp's customer funding this project so far. That does not mean it's ready for general availability. For instance, these are the most outstanding issues:
>
> * The **security** subsystem needs a review. If you try to change the admin password through the web-ui, there's a bug that would leave you with broken authentication.
> * The WPS is non-functional and has been temporarily removed from the build
@@ -183,7 +183,7 @@ Follow the [Developer's Guide](develop/index.md) to learn more about the System'
# Deployment Guide
Chek out the [Deployment Guide](deploy/index.md) to learn about deployment options, configuration, and target platforms.
Check out the [Deployment Guide](deploy/index.md) to learn about deployment options, configuration, and target platforms.
<!--

View File

@@ -103,7 +103,7 @@ public class BlockingCatalog extends AbstractCatalogDecorator {
else if (MapInfo.class.isAssignableFrom(type)) delegate.add((MapInfo) info);
else
throw new IllegalArgumentException(
"Uknown CatalogInfo type: " + type.getCanonicalName());
"Unknown CatalogInfo type: " + type.getCanonicalName());
return info;
}
@@ -130,7 +130,7 @@ public class BlockingCatalog extends AbstractCatalogDecorator {
else if (MapInfo.class.isAssignableFrom(type)) delegate.save((MapInfo) info);
else
throw new IllegalArgumentException(
"Uknown CatalogInfo type: " + type.getCanonicalName());
"Unknown CatalogInfo type: " + type.getCanonicalName());
} catch (RuntimeException e) {
log.error("Error saving {} with patch {}", info, patch, e);
throw e;
@@ -170,7 +170,7 @@ public class BlockingCatalog extends AbstractCatalogDecorator {
return type.cast(delegate.getLayerGroupByName(name));
if (StyleInfo.class.isAssignableFrom(type)) return type.cast(delegate.getStyleByName(name));
if (MapInfo.class.isAssignableFrom(type)) return type.cast(delegate.getMapByName(name));
throw new IllegalArgumentException("Uknown CatalogInfo type: " + type.getCanonicalName());
throw new IllegalArgumentException("Unknown CatalogInfo type: " + type.getCanonicalName());
}
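The `BlockingCatalog` hunks above dispatch on the concrete `CatalogInfo` subtype and reject anything unrecognized with an `IllegalArgumentException`. A stdlib-only sketch of that dispatch pattern, using hypothetical stand-in types rather than GeoServer's actual `CatalogInfo` hierarchy:

```java
// Hypothetical stand-ins for GeoServer's CatalogInfo hierarchy.
interface Info {}
class LayerInfo implements Info {}
class StyleInfo implements Info {}
class UnknownInfo implements Info {}

public class TypeDispatch {
    /** Routes to a handler based on the concrete subtype, mirroring the add/save dispatch above. */
    public static String dispatch(Info info) {
        Class<?> type = info.getClass();
        if (LayerInfo.class.isAssignableFrom(type)) return "layer";
        if (StyleInfo.class.isAssignableFrom(type)) return "style";
        // Unsupported subtypes fail fast, as in BlockingCatalog
        throw new IllegalArgumentException(
                "Unknown CatalogInfo type: " + type.getCanonicalName());
    }

    public static void main(String[] args) {
        System.out.println(dispatch(new LayerInfo())); // prints "layer"
        try {
            dispatch(new UnknownInfo());
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```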
public <C extends org.geoserver.catalog.CatalogInfo> Stream<C> query(@NonNull Query<C> query) {

View File

@@ -21,7 +21,7 @@ import org.springframework.test.web.reactive.server.WebTestClient;
/**
* Configures the {@link WebTestClient} to be capable of encoding and decoding {@link CatalogInfo}
* obejcts
* objects
*/
public class WebTestClientSupport implements Supplier<WebTestClient> {

View File

@@ -11,7 +11,7 @@ import org.springframework.test.web.reactive.server.WebTestClient;
/**
* Configures the {@link WebTestClient} to be capable of encoding and decoding {@link CatalogInfo}
* obejcts using {@link CatalogInfoXmlEncoder} and {@link CatalogInfoXmlDecoder}
* objects using {@link CatalogInfoXmlEncoder} and {@link CatalogInfoXmlDecoder}
*/
@AutoConfigureWebTestClient(timeout = "360000")
public class WebTestClientSupportConfiguration {

View File

@@ -1,5 +1,5 @@
# Cloud Native GeoServer REST configuration API v1 service
Spring Boot/Cloud microservice that exposes GeoSever [REST API](https://docs.geoserver.org/stable/en/user/rest/).
Spring Boot/Cloud microservice that exposes GeoServer [REST API](https://docs.geoserver.org/stable/en/user/rest/).
Follow the service [documentation](../../docs/develop/services/resconfig-v1-service.md) and keep it up to date.

View File

@@ -51,7 +51,7 @@ Here is the full list of `web-ui` config properties in YAML format:
```yaml
geoserver:
web-ui:
# Theese are all default values, here just for reference. You can omit them and add only the ones to disable or further configure
# These are all default values, here just for reference. You can omit them and add only the ones to disable or further configure
security.enabled: true
wfs.enabled: true
wms.enabled: true

View File

@@ -13,10 +13,10 @@
<dockerfile.skip>false</dockerfile.skip>
<docker.image.name>geoserver-cloud-webui</docker.image.name>
<start-class>org.geoserver.cloud.web.app.WebUIApplication</start-class>
<!-- The following properties control whether all gs-wms,gs-wfs,gs-wcs,gs-wps, and gs-gwc transitive dependencies are
excluded. Those jars need to be on the classpath even if the respective profiles are disabled, because they each provide
a configuration org.geoserver.config.ServiceInfo class whose absence may cause a com.thoughtworks.xstream.mapper.CannotResolveClassException.
The wps,wfs,wcs,wps, and gwc profiles bellow remove the wildcard character from their respective xxx_excludes property in
<!-- The following properties control whether all gs-wms,gs-wfs,gs-wcs,gs-wps, and gs-gwc transitive dependencies are
excluded. Those jars need to be on the classpath even if the respective profiles are disabled, because they each provide
a configuration org.geoserver.config.ServiceInfo class whose absence may cause a com.thoughtworks.xstream.mapper.CannotResolveClassException.
The wps,wfs,wcs,wps, and gwc profiles below remove the wildcard character from their respective xxx_excludes property in
order to restore the profiles' transitive dependencies -->
<wms_excludes>*</wms_excludes>
<wfs_excludes>*</wfs_excludes>

View File

@@ -19,7 +19,7 @@ Add the following dependency to each micro-service `pom.xml`:
Spring-boot's autoconfiguration SPI is used in order to automatically engage the correct `Catalog` implementation and bean wiring depending on
what's available in the classpath. Hence, regardless of which storage backend is used, it's only required to include this module as a dependency
and set the configuration properties as explained bellow.
and set the configuration properties as explained below.
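The paragraph above describes classpath-driven backend selection. Spring Boot does this declaratively (e.g. via conditional annotations on `@Configuration` classes); the sketch below illustrates the underlying idea with plain `Class.forName` probing. The backend marker-class names are hypothetical, not GeoServer Cloud's actual classes:

```java
public class BackendSelector {
    /** Returns the first candidate whose marker class is present on the classpath. */
    public static String selectBackend(String... markerClasses) {
        for (String marker : markerClasses) {
            try {
                Class.forName(marker);
                return marker; // class is present: engage this backend
            } catch (ClassNotFoundException ignored) {
                // not on the classpath, try the next candidate
            }
        }
        return "data-directory"; // fallback default when no marker is found
    }

    public static void main(String[] args) {
        // java.util.HashMap is always present, so it wins over the missing first candidate.
        System.out.println(selectBackend("com.example.JdbcBackend", "java.util.HashMap"));
    }
}
```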
### Configuring JDBCConfig and JDBCStore

View File

@@ -78,9 +78,9 @@ class CloudJdbcConfigDatabase extends ConfigDatabase {
/**
* Cache provider that returns no-op caches, where all tests will be cache misses. We use this
* here because GeoServer's jdbcconfig's {@link ConfigDatabase} has serious concurrency issues.
* For example, if querying WMS capabilities concurrently, will allways get the exception
* bellow, since it's updating a {@link LayerInfo}'s styles list while other threads are reading
* it to produce the capabilities document.
* For example, if querying WMS capabilities concurrently, will always get the exception below,
* since it's updating a {@link LayerInfo}'s styles list while other threads are reading it to
* produce the capabilities document.
*
* <pre>
* <code>

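The cache provider described in the javadoc above hands out caches where nothing is ever stored, so concurrent readers can never observe a stale entry. A minimal stdlib sketch of such a no-op cache (a hypothetical interface, not the actual cache API used by jdbcconfig):

```java
import java.util.Optional;
import java.util.function.Function;

/** A cache where every lookup is a miss and every put is discarded. */
class NoOpCache<K, V> {
    Optional<V> getIfPresent(K key) {
        return Optional.empty(); // nothing is ever cached: always a miss
    }

    void put(K key, V value) {
        // intentionally discard: concurrent writers can never leave stale entries behind
    }

    /** Always computes via the loader, since nothing is ever cached. */
    V get(K key, Function<K, V> loader) {
        return loader.apply(key);
    }
}

public class NoOpCacheDemo {
    public static void main(String[] args) {
        NoOpCache<String, Integer> cache = new NoOpCache<>();
        cache.put("a", 1);
        System.out.println(cache.getIfPresent("a").isPresent()); // false: put was a no-op
        System.out.println(cache.get("a", k -> 42)); // 42, computed by the loader
    }
}
```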
View File

@@ -22,7 +22,7 @@ import org.springframework.context.annotation.Configuration;
*
* <p>Note this configuration is intended to be used solely in WebFlux-based services. All catalog
* and config backend configuration relies upon an enabled {@link GeoServerBackendConfigurer}
* {@code @Configuration}, loaded either explicitly or, preferrably, through one of the
* {@code @Configuration}, loaded either explicitly or, preferably, through one of the
* auto-configurations provided in {@code starter-catalog-backends}
*/
@Configuration

View File

@@ -26,7 +26,7 @@ import org.springframework.web.context.request.RequestContextListener;
/**
* Smoke test to check geoserver servlet context related spring beans are not loaded if the
* auto-configuration is disabled throgh {@code geoserver.servlet.enabled=false}
* auto-configuration is disabled through {@code geoserver.servlet.enabled=false}
*/
@SpringBootTest(
classes = TestConfiguration.class,