With the configuration below, entries expire 15 days (360h) after insertion:
<backing-map-scheme>
  <local-scheme>
    <expiry-delay>360h</expiry-delay>
  </local-scheme>
</backing-map-scheme>
For more information, see:
http://coherence.oracle.com/display/COH34UG/local-scheme#local-scheme-expirydelay
<expiry-delay>0</expiry-delay> means entries never expire; never expiring is also the default configuration.
For more information, see:
http://coherence.oracle.com/display/COH34UG/local-scheme#local-scheme-expirydelay
Yes: configure a separate scheme for each cache and give each scheme its own expiry policy (see the sample configuration after the link below).
http://coherence.oracle.com/display/COH34UG/local-scheme#local-scheme-expirydelay
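A minimal sketch of such a configuration (the cache and scheme names are made up for illustration):

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>short-lived-cache</cache-name>
      <scheme-name>short-lived-scheme</scheme-name>
    </cache-mapping>
    <cache-mapping>
      <cache-name>long-lived-cache</cache-name>
      <scheme-name>long-lived-scheme</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <local-scheme>
      <scheme-name>short-lived-scheme</scheme-name>
      <expiry-delay>10m</expiry-delay>
    </local-scheme>
    <local-scheme>
      <scheme-name>long-lived-scheme</scheme-name>
      <expiry-delay>360h</expiry-delay>
    </local-scheme>
  </caching-schemes>
</cache-config>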
file:///E:/oracle/coherence/document/E18686_01/coh.37/e18691/usehibernateascoh.htm#CEGFEFJH
http://raymondhekk.iteye.com/blog/252817
http://wiki.tangosol.com/display/COH33UG/Sample+CacheStore
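In the spirit of the Sample CacheStore linked above, here is a minimal JDBC-backed CacheStore sketch; the table, SQL and JDBC URL are placeholders and are not taken from the links:

import com.tangosol.net.cache.AbstractCacheStore;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SimpleJdbcCacheStore extends AbstractCacheStore {
    private final String url;

    public SimpleJdbcCacheStore(String url) {
        this.url = url;                     // JDBC URL, e.g. supplied via <init-params>
    }

    @Override
    public Object load(Object key) {
        try (Connection con = DriverManager.getConnection(url);
             PreparedStatement ps = con.prepareStatement(
                     "SELECT data FROM sample_table WHERE id = ?")) {
            ps.setObject(1, key);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;   // cache miss -> null
            }
        } catch (SQLException e) {
            throw new RuntimeException("load failed for key " + key, e);
        }
    }

    @Override
    public void store(Object key, Object value) {
        try (Connection con = DriverManager.getConnection(url);
             PreparedStatement ps = con.prepareStatement(
                     "UPDATE sample_table SET data = ? WHERE id = ?")) {
            ps.setObject(1, value);
            ps.setObject(2, key);
            ps.executeUpdate();             // insert-if-absent handling omitted in this sketch
        } catch (SQLException e) {
            throw new RuntimeException("store failed for key " + key, e);
        }
    }

    @Override
    public void erase(Object key) {
        try (Connection con = DriverManager.getConnection(url);
             PreparedStatement ps = con.prepareStatement(
                     "DELETE FROM sample_table WHERE id = ?")) {
            ps.setObject(1, key);
            ps.executeUpdate();
        } catch (SQLException e) {
            throw new RuntimeException("erase failed for key " + key, e);
        }
    }
}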
For simple types, it is recommended to use simple objects such as a Java String.
For complex objects, it is recommended to use POF objects rather than plain Java-serialized objects, because POF objects take up much less space in Coherence than Java-serialized objects.
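For illustration, a minimal POF class sketch; the Vehicle class, its fields and the property indexes are assumptions, and the class would also need to be registered in the POF configuration file:

import com.tangosol.io.pof.PofReader;
import com.tangosol.io.pof.PofWriter;
import com.tangosol.io.pof.PortableObject;
import java.io.IOException;

public class Vehicle implements PortableObject {
    private String plateNumber;
    private int year;

    public Vehicle() {
        // POF deserialization requires a public no-arg constructor
    }

    public Vehicle(String plateNumber, int year) {
        this.plateNumber = plateNumber;
        this.year = year;
    }

    @Override
    public void readExternal(PofReader in) throws IOException {
        // Property indexes must match writeExternal exactly
        plateNumber = in.readString(0);
        year = in.readInt(1);
    }

    @Override
    public void writeExternal(PofWriter out) throws IOException {
        out.writeString(0, plateNumber);
        out.writeInt(1, year);
    }
}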
No.
Coherence provides the ability to search for cache entries that meet a given set of criteria. The result set may be sorted if desired. Queries are evaluated with Read Committed isolation.
It should be noted that queries apply only to currently cached data (and will not use the CacheLoader interface to retrieve additional data that may satisfy the query). Thus, the dataset should be loaded entirely into cache before queries are performed. In cases where the dataset is too large to fit into available memory, it may be possible to restrict the cache contents along a specific dimension (e.g. date) and manually switch between cache queries and database queries based on the structure of the query. For maintainability, this is usually best implemented inside a cache-aware data access object (DAO).
Indexing requires the ability to extract attributes on each Partitioned cache node; in the case of dedicated CacheServer instances, this implies (usually) that application classes must be installed in the CacheServer classpath.
For Local and Replicated caches, queries are evaluated locally against unindexed data. For Partitioned caches, queries are performed in parallel across the cluster, using indexes if available. Coherence includes a Cost-Based Optimizer (CBO). Access to unindexed attributes requires object deserialization (though indexing on other attributes can reduce the number of objects that must be evaluated).
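As a rough sketch of what such a query looks like in code (the cache name and the getYear attribute are assumptions, not from this document), a filter is evaluated in parallel across a partitioned cache, and an index avoids per-entry deserialization:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.extractor.ReflectionExtractor;
import com.tangosol.util.filter.EqualsFilter;
import java.util.Set;

public class QueryExample {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("sample");
        // Index the attribute so the query does not have to deserialize every value
        cache.addIndex(new ReflectionExtractor("getYear"), /*fOrdered*/ true, /*comparator*/ null);
        // The filter is evaluated in parallel on the storage-enabled members
        Set entries = cache.entrySet(new EqualsFilter("getYear", Integer.valueOf(2010)));
        System.out.println("matches: " + entries.size());
        CacheFactory.shutdown();
    }
}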
Query the DB for the keys, then use them to pull the data into the cache.
This is a pattern that we see used fairly often, particularly when the full set of data is in the database and only part of that same set of data is in the cache.
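A hedged sketch of that pattern (the SQL, JDBC URL, key type and cache name are all illustrative): query the database for the keys only, then let getAll() pull the matching entries into the cache through the configured CacheLoader/CacheStore:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.HashSet;
import java.util.Set;

public class CacheWarmer {
    public static void main(String[] args) throws Exception {
        Set<Long> keys = new HashSet<Long>();
        // Step 1: fetch only the keys from the database (illustrative SQL)
        try (Connection con = DriverManager.getConnection("jdbc:your-db-url");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id FROM sample_table WHERE year = 2010")) {
            while (rs.next()) {
                keys.add(Long.valueOf(rs.getLong(1)));
            }
        }
        // Step 2: pull the corresponding entries into the cache; misses go through the CacheLoader
        NamedCache cache = CacheFactory.getCache("sample");
        cache.getAll(keys);
        CacheFactory.shutdown();
    }
}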
Query results can be paginated:
http://www.exforsys.com/reviews/oracle-coherence/oracle-coherence-paging-query-results.html
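A minimal LimitFilter sketch along the lines of the article above (the cache name and page size are arbitrary):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.filter.AlwaysFilter;
import com.tangosol.util.filter.LimitFilter;
import java.util.Set;

public class PagingExample {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("sample");
        // Wrap the real filter in a LimitFilter; here AlwaysFilter simply pages through everything
        LimitFilter page = new LimitFilter(new AlwaysFilter(), 25);
        Set entries = cache.entrySet(page);
        while (!entries.isEmpty()) {
            System.out.println("page " + page.getPage() + ": " + entries.size() + " entries");
            page.nextPage();                 // advance to the next page
            entries = cache.entrySet(page);
        }
        CacheFactory.shutdown();
    }
}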
<caching-schemes>
  <distributed-scheme>
    <scheme-name>ExamplesPartitionedPofScheme</scheme-name>
    <service-name>PartitionedPofCache</service-name>
    <serializer>
      <instance>
        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
        <init-params>
          <init-param>
            <param-type>String</param-type>
            <param-value>vechicle-pof-config.xml</param-value>
          </init-param>
        </init-params>
      </instance>
    </serializer>
    <backup-count>1</backup-count>
    <backing-map-scheme>
      <external-scheme>
        <nio-memory-manager/>
        <high-units>4000</high-units>
        <unit-calculator>BINARY</unit-calculator>
        <unit-factor>1048576</unit-factor>
      </external-scheme>
    </backing-map-scheme>
    <autostart>true</autostart>
  </distributed-scheme>
</caching-schemes>
Even when NIO (off-heap) storage is used, index and primary-key data are still kept on the JVM heap.
If using off-heap data storage (with close to zero effect on the heap) and you need a lot of indexes for your queries (which, together with the primary keys, must be located on the heap), this may cap how much data each JVM can handle! Also note that even an overflow map would not protect you against the heap filling up with key and index data, so this is always an issue at some point!
This can be viewed in JConsole at the following location.
I got strange behaviour from the refresh-ahead functionality. Here is my configuration:
<cache-config>
  <defaults>
    <serializer>pof</serializer>
    <socket-provider system-property="tangosol.coherence.socketprovider"/>
  </defaults>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>sample</cache-name>
      <scheme-name>extend-near-distributed</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <near-scheme>
      <scheme-name>extend-near-distributed</scheme-name>
      <front-scheme>
        <local-scheme>
          <high-units>20000</high-units>
          <expiry-delay>10s</expiry-delay>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <distributed-scheme>
          <scheme-ref>distributed</scheme-ref>
        </distributed-scheme>
      </back-scheme>
      <invalidation-strategy>all</invalidation-strategy>
    </near-scheme>
    <distributed-scheme>
      <scheme-name>distributed</scheme-name>
      <service-name>sample</service-name>
      <thread-count>20</thread-count>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <local-scheme>
              <expiry-delay>10s</expiry-delay>
            </local-scheme>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <class-name>com.sample.CustomCacheStore</class-name>
            </class-scheme>
          </cachestore-scheme>
          <refresh-ahead-factor>0.5</refresh-ahead-factor>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
</cache-config>
And if I request my service with a period of 6s (10s*0.5), everything is fine. I have no delay in the response (except for the first time), but if I change the period to 3 seconds, for example, then I start getting delays every 10 seconds. I have no idea why this is happening. It looks like if I request my service before the expected period (from 5 to 10 seconds), asynchronous loading doesn't happen, even if I request it again after that. Is there any explanation for this, and how can I bypass this behaviour?
The problem has been solved. The reason I got into this situation is that the front scheme didn't notify the back scheme because the expiration times were the same. In a few words, to use the refresh-ahead functionality with a near cache, you have to set the expiration time of the front scheme equal to the soft-expiration time (in this case 10s*0.5 = 5s).
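In other words, with the 10s back-tier expiry and refresh-ahead-factor of 0.5 shown above, the front scheme would look roughly like this (only the changed fragment is shown):

<front-scheme>
  <local-scheme>
    <high-units>20000</high-units>
    <!-- equal to the back tier's soft-expiration: 10s * 0.5 = 5s -->
    <expiry-delay>5s</expiry-delay>
  </local-scheme>
</front-scheme>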
