sys_attrs_vm(5)						      sys_attrs_vm(5)



NAME

  sys_attrs_vm - system	attributes for the vm kernel subsystem

DESCRIPTION

  This reference page describes	system attributes for the Virtual Memory (vm)
  kernel subsystem. See	sys_attrs(5) for general guidelines about changing
  system attributes.

  In the following list, an asterisk (*) precedes the names of attributes
  whose	values you can change while the	system is running. Changes to values
  of attributes	whose names are	not preceded by	an asterisk take effect	only
  when the system is rebooted.



  anon_rss_enforce
      A	value that sets	no limit (0), a	soft limit (1),	or a hard limit	(2)
      on the resident set size of a process.

      Default value: 0 (no limit)

      By default, applications can set a process-specific limit	on the number
      of pages resident	in memory by specifying	the RLIMIT_RSS resource	value
      in a setrlimit() call. However, applications are not required to limit
      the resident set size of a process and there is no system-wide default
      limit. Therefore,	the resident set size for a process is limited only
      by system	memory restrictions. If	the demand for memory exceeds the
      number of	free pages, processes with large resident set sizes are
      likely candidates	for swapping.
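      The per-process limit mentioned above is the standard RLIMIT_RSS
      resource limit. As an illustrative sketch (using Python's standard
      resource module, which wraps the same setrlimit()/getrlimit()
      interface; the 64-MB figure is hypothetical), a process can lower its
      own soft limit:

```python
import resource

# Query the current per-process resident-set-size limit (RLIMIT_RSS).
soft, hard = resource.getrlimit(resource.RLIMIT_RSS)

# Lower the soft limit to a hypothetical 64 MB, never exceeding the
# hard limit; lowering a soft limit requires no privileges.
limit = 64 * 1024 * 1024
if hard != resource.RLIM_INFINITY:
    limit = min(limit, hard)
resource.setrlimit(resource.RLIMIT_RSS, (limit, hard))
```

      Note that, as the text above explains, the kernel enforces such a
      limit only when anon_rss_enforce is set to 1 or 2.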

      The anon_rss_enforce attribute enables different levels of control over
      process set sizes	and when the pages that	a process is using in
      anonymous	memory are swapped out (blocking the process) during times of
      contention for free pages. Setting anon_rss_enforce to either 1 or 2
      allows you to enforce a system-wide limit on resident set size for a
      process through the vm_rss_max_percent attribute. Setting
      anon_rss_enforce to 1 (a soft limit) enables finer control over
      process blocking and paging of anonymous memory by allowing you to set
      the vm_rss_block_target and vm_rss_wakeup_target attributes.

      When anon_rss_enforce is set to 2, the resident set size for a process
      cannot exceed the	system-wide limit set by the vm_rss_max_percent
      attribute	or a process-specific limit, if	any, that is set by an
      application's setrlimit()	call. When the resident	set size exceeds
      either of	these limits, the system starts	to swap	out pages of
      anonymous	memory that the	process	is already using to keep the resident
      set size within the specified limit.

      When anon_rss_enforce is set to 1, any system-default and	process-
      specific limits on resident set size still apply and will	cause swap-
      ping to occur when exceeded. Otherwise, a	process's pages	are swapped
      out when the number of free pages	is less	than the value of the
      vm_rss_block_target attribute. The process remains blocked until the
      number of	free pages reaches the value of	the vm_rss_wakeup_target.



  * boost_pager_priority
      This attribute supports diskless systems and enables the pager to	be
      more responsive.	It functions under the following conditions:

	+  The diskless	driver is loaded and configured. Diskless system ser-
	   vices are part of the Dataless Management Services (DMS). DMS
	   enables systems to run the operating	system from a server without
	   requiring a local hard disk on each client system.

	+  The server is serving a realtime pre-emptive	kernel.

      Default value: 0 (off)

      Maximum value: 1 (on)



  * enable_yellow_zone
      A	value that enables (1) or disables (0) a soft guard page on the	pro-
      gram stack. This allows an application to	enter a	signal handler on
      stack overflows, which otherwise would cause a core dump.

      Default value: 0 (disabled)

      The enable_yellow_zone attribute is intended for use by systems pro-
      grammers who are debugging kernel	applications, such as device drivers.



  gh_chunks
      Number of	4-MB chunks of memory reserved at boot time for	shared memory
      use. This	memory cannot be used for any other purpose, nor can it	be
      returned to the system or	reclaimed when not being used. On NUMA-aware
      systems (GS80, GS160, and	GS320),	the gh_chunks attribute	affects	only
      the first	Resource Affinity Domain (RAD).	See the	entry for
      rad_gh_regions for more information.

      Default value: 0 (chunks)	(The zero value	means that use of granularity
      hints is disabled.)

      Minimum value: 0

      Maximum value: 9,223,372,036,854,775,807

      The attributes associated with "granularity hints" (the gh_* attributes)
      are sometimes recommended	specifically for database servers. Using seg-
      mented shared memory (SSM) is the	alternative to using granularity
      hints and	is recommended for most	systems. Therefore, if the gh_chunks
      attribute	is not set to zero, the	ssm_threshold attribute	of the ipc
      subsystem	should be set to zero. If the gh_chunks	attribute is set to
      zero, the	ssm_threshold attribute	should not be set to zero.

      See your database	product	documentation and the System Configuration
      and Tuning manual	for more information about using granularity hints or
      SSM.
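      As a sketch (the gh_chunks value is hypothetical), the rule above
      might be expressed in /etc/sysconfigtab as follows, with exactly one
      of the two attributes set to zero:

```
vm:
    gh_chunks = 512

ipc:
    ssm_threshold = 0
```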



  gh_fail_if_no_mem
      A	value that enables (1) or disables (0) a failure return	by the shmget
      function under certain conditions	when granularity hints is in use.
      When this attribute is set to 1, the shmget() function returns a
      failure if the requested segment size is larger than the value of the
      gh_min_seg_size attribute and if there is insufficient memory allocated
      by the gh_chunks attribute to satisfy the request.

      Default value: 1 (enabled)



  gh_front_alloc
      A	value that specifies whether the memory	reserved for granularity
      hints is (1) or is not (0) allocated from	low physical memory
      addresses. Allocation from low physical memory addresses is useful if
      you have an odd number of	memory boards.

      Default value: 1 (allocation from	low physical memory addresses)



  gh_keep_sorted
      Specifies	whether	the memory reserved for	granularity hints is (1) or
      is not (0) sorted.

      Default value: 0 (not sorted)



  gh_min_seg_size
      Size, in bytes, of the segment in	which shared memory is allocated from
      the memory reserved for shared memory, according to the value of the
      gh_chunks attribute.

      Default value: 8,388,608 (bytes, or 8 MB)

      Minimum value: 0

      Maximum value: 9,223,372,036,854,775,807



  kernel_stack_pages
      Number of	pages per thread that are used for stack space in kernel
      mode.

      Default value: 2 (pages per thread)

      Minimum value: 2

      Maximum value: 3

      The sysconfig command may	display	0 (zero) when the actual setting is
      2. This error will be corrected in a release following Tru64 UNIX	Ver-
      sion 5.0.

      It is strongly recommended that you not modify kernel_stack_pages
      unless directed to do so by your support representative. In the event
      of a kernel stack	not valid halt error that is caused by a kernel	stack
      overflow problem,	increasing the value of	kernel_stack_pages may work
      around the problem. This workaround will not be successful if the	error
      occurred because the stack pointer became	corrupted. In any event, a
      kernel stack not valid halt error	is always an unexpected	error that
      should be	reported to your support representative	for further investi-
      gation.



  * kstack_free_target
      Number of	freed kernel stack pages that are saved	for reuse. Above this
      limit, freed kernel stack	pages are immediately deallocated.

      Default value: 5 (pages)

      Minimum value: 0

      Maximum value: 2,147,483,647

      Deallocation of freed kernel stack pages ensures that memory is avail-
      able for other operations. However, the processor	time required for
      deallocating freed kernel	stack pages has	a negative performance impact
      that might be more noticeable on NUMA-enabled systems (GS80, GS160,
      GS320) than on other systems.  You can use the kstack_free_target	value
      to make the most appropriate tradeoff between increased memory consump-
      tion and time spent by CPUs in a purge operation.

      You can change the value of the kstack_free_target attribute while the
      system is	running.



  malloc_percpu_cache
      A value that enables (1) or disables (0) caching of malloc memory on a
      per-CPU basis.

      Default value: 1

      Do not modify the	default	setting	for this attribute unless instructed
      to do so by support personnel or by patch	kit documentation.



  new_wire_method
      Default value: 1 (on)

      Do not modify the	default	setting	for this attribute unless instructed
      to do so by support personnel or by patch	kit documentation.



  private_cache_percent
      Percentage of the	secondary cache	that is	reserved for anonymous
      (nonshared) memory.  Increasing the cache	for anonymous memory reduces
      the cache	space available	for file-backed	memory (shared). This attri-
      bute is useful only for benchmarking.

      Default value: 0 (percent)

      Minimum value: 0

      Maximum value: 100



  rad_gh_regions[n]
      For NUMA-aware systems (GS80, GS160, and GS320), the granularity hints
      chunk size (in megabytes)	for the	Resource Affinity Domain (RAD) iden-
      tified by	n. There are 64	elements in the	attribute array,
      rad_gh_regions[0]	to rad_gh_regions[63]. Although	all elements in	the
      array are	visible	on all systems,	the kernel uses	only the element
      values corresponding to RADs that	exist on the system.  See the entry
      for the gh_chunks	attribute for general information about	granularity
      hints memory allocation.

      Default value: 0 (MB) (Granularity hints is disabled.)

      The array of rad_gh_regions[n] attributes replaces the gh_chunks
      attribute, which affects only the first RAD or (for non-NUMA systems)
      the only RAD (rad_gh_regions[0]) supported by the system. Although
      gh_chunks and the
      set of rad_gh_regions attributes both specify how	much memory is mani-
      pulated through granularity hints	memory allocation, the unit of meas-
      urement for the former is	4-megabyte units whereas the unit of measure-
      ment for the latter is megabytes.	Therefore:

      rad_gh_regions[0]	= gh_chunks * 4

      Setting the gh_chunks attribute, not the rad_gh_regions[0] attribute,
      is recommended if	you want to use	granularity hints memory allocation
      on non-NUMA systems.
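      The unit conversion can be sketched as follows (Python, for
      illustration only; the function name is hypothetical):

```python
def rad_gh_regions_0(gh_chunks: int) -> int:
    """Equivalent rad_gh_regions[0] value (in MB) for a given
    gh_chunks setting (in 4-MB chunks), per the formula above."""
    return gh_chunks * 4

print(rad_gh_regions_0(128))  # 128 chunks of 4 MB = 512 MB
```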



  replicate_user_text
      A value that controls whether user text can or cannot be replicated on
      multiple CPUs of a NUMA-enabled system (GS80, GS160, GS320).
      When the value is	1, replication of user text is enabled.	When the
      value is 0, replication of user text is disabled.	This attribute is
      sometimes	used by	kernel developers when debugging software for NUMA-
      enabled systems; however,	the attribute is not for general use. (The
      value is ignored on non-NUMA systems and changing	it to 0	on NUMA	sys-
      tems might degrade performance.)

      Default value: 1

      Do not change the	value of this attribute	unless instructed to do	so by
      support personnel	or patch kit instructions.



  swapdevice
      The device partitions reserved for swapping. This is a comma-separated
      string (for example, /dev/disk/dsk0g,/dev/disk/dsk0d) that can be up
      to 256 bytes in length.



  * ubc_borrowpercent
      Percentage of memory above which the UBC is only borrowing memory	from
      the virtual memory subsystem.  Paging does not occur until the UBC has
      returned all its borrowed	pages.

      Default value: 20	(percent)

      Minimum value: 0

      Maximum value: 100

      Increasing this value may	increase UBC cache effectiveness and improve
      throughput; however, the cost is a likely	degradation of system
      response time during a low memory	condition.



  ubc_ffl
      Obsolete;	currently ignored by the software.



  * ubc_kluster_cnt
      Specifies	the number of pages to consolidate before initiating an	I/O
      operation.

      Default value: 32	(pages)

      Minimum value: 0

      Maximum value: 512

      The default value	is appropriate for the vast majority of	systems.
      Raising this value may improve I/O efficiency if relatively few users
      and applications write to	only a few very	large files, and there is
      high probability that write operations affect contiguous pages. How-
      ever, the	cost is	increased time spent in	memory (and holding locks for
      a	longer length of time) while the system	determines what	state pages
      are in and which ones can	be clustered.



  * ubc_maxdirtymetadata_pcnt
      A	threshold value	that forces cleanup of AdvFS metadata that is being
      stored in	the UBC. The default setting forces return of pages contain-
      ing AdvFS	metadata when they reach 70 percent of the UBC.

      This is not a tuning parameter. Do not modify the	default	setting
      unless directed to do so by support personnel or patch kit instruc-
      tions.

      Default value: 70	(percent)

      Minimum value: 0

      Maximum value: 100



  * ubc_maxdirtywrites
      Number of	I/O operations (per second) that the virtual memory subsystem
      performs when the	number of dirty	(modified) pages in the	UBC exceeds
      the value of the vm_ubcdirtypercent attribute.

      Default value: 5 (operations per second)

      Minimum value: 0

      Maximum value: 2,147,483,647



  * ubc_maxpercent
      Maximum percentage of physical memory that the UBC can use at one	time.

      Default value: 100 (percent)

      Minimum value: 0

      Maximum value: 100

      It is recommended	that this value	be set to a value in the range of 70
      to 80 percent. On	an overloaded system, values higher than 80 can	delay
      return of	excess UBC pages to vm and adversely affect performance.



  * ubc_minpercent
      Minimum percentage of physical memory that the UBC can use.

      Default value: 10	(percent)

      Minimum value: 0

      Maximum value: 100



  * vm_aggressive_swap
      A	value that enables (1) or disables (0) the ability of the task
      swapper to aggressively swap out idle tasks.

      Default value: 0 (disabled)

      Setting this attribute to	1 helps	prevent	a low-memory condition from
      occurring	and allows more	jobs to	be run simultaneously. However,
      interactive response times are likely to be longer on a system that is
      excessively paging and swapping.



  * vm_asyncswapbuffers
      The number of asynchronous I/O requests per swap partition that can be
      outstanding at one time.	Asynchronous swap requests are used for
      pageout operations and for prewriting modified pages.

      Default value: 4 (requests)

      Minimum value: 0

      Maximum value: 2,147,483,647



  vm_clustermap
      Size, in bytes, of the kernel cluster submap, which is used to allocate
      the scatter/gather map for clustered file	and swap I/O.

      Default value: 1,048,576 (bytes, or 1 MB)

      Minimum value: 0

      Maximum value: 9,223,372,036,854,775,807



  vm_clustersize
      Maximum size, in bytes, of a single scatter/gather map for a clustered
      I/O request.

      Default value: 65,536 (bytes, or 64 KB)

      Minimum value: 0

      Maximum value: 9,223,372,036,854,775,807



  vm_cowfaults
      Number of	times that the pages of	an anonymous object are	copy-on-write
      faulted after a fork operation but before	they are copied	as part	of
      the fork operation.

      Default value: 4 (faults)

      Minimum value: 0

      Maximum value: 2,147,483,647



  vm_csubmapsize
      Size, in bytes, of the kernel copy submap.

      Default value: 1,048,576 (bytes, or 1 MB)

      Minimum value: 0

      Maximum value: 9,223,372,036,854,775,807



  vm_ffl
      Obsolete;	currently ignored by the software.



  * vm_inswappedmin
      Minimum amount of	time, in seconds, that a task remains in the
      inswapped	state before it	is considered a	candidate for outswapping.

      Default value: 1 (second)

      Minimum value: 0

      Maximum value: 60



  vm_max_rdpgio_kluster
      Size, in bytes, of the largest pagein (read) cluster that	is passed to
      the swap device.

      Default value: 16,384 (bytes) (16	KB)

      Minimum value: 8192

      Maximum value: 131,072



  vm_max_wrpgio_kluster
      Size, in bytes, of the  largest pageout (write) cluster that is passed
      to the swap device.

      Default value: 32,768 (bytes) (32	KB)

      Minimum value: 8192

      Maximum value: 131,072



  vm_min_kernel_address
      Base address of the kernel's virtual address space.  The value can be
      either 0xffffffff80000000 or 0xfffffffe00000000, which sets the size of
      the kernel's virtual address space to either 2 GB	or 8 GB, respec-
      tively.

      Default value: 18,446,744,073,709,551,615 (2 to the power of 64, minus 1)

      You may need to increase the kernel's virtual address space on very
      large memory (VLM) systems (for example, systems with several gigabytes
      of physical memory and several thousand  large processes).



  * vm_page_free_hardswap
      The threshold value that stops page swapping. When the number of pages
      on the free list reaches this value, page swapping stops.

      Default value: Varies, depending on physical memory size;	about 16
      times the	value of vm_page_free_target

      Minimum value: 0

      Maximum value: 2,147,483,647

      The vm_page_free_hardswap	value is computed from the
      vm_page_free_target value, which by default scales with physical memory
      size. If you change vm_page_free_target, your change affects
      vm_page_free_hardswap as well.



  * vm_page_free_min
      The threshold value that starts paging. When the number of pages on
      the free page list falls below this value, paging starts.

      Default value: 20	(pages,	or twice the amount of vm_page_free_reserved)

      Minimum value: 0

      Maximum value: 2,147,483,647



  * vm_page_free_optimal
      The threshold value that begins hard swapping. When the number of	pages
      on the free list falls below this	value for five seconds,	hard swapping
      begins.

      Default value: Automatically scaled by using this	formula:


	   vm_page_free_min + ((vm_page_free_target - vm_page_free_min)	/ 2)

      Minimum value: 0 (pages)

      Maximum value: 2,147,483,647
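      For illustration, the scaling formula can be evaluated with the
      documented defaults (a vm_page_free_min of 20 pages and, on a system
      with 1 GB of memory, a vm_page_free_target of 512 pages); truncating
      integer division is an assumption here:

```python
def page_free_optimal(free_min: int, free_target: int) -> int:
    # vm_page_free_min + ((vm_page_free_target - vm_page_free_min) / 2),
    # assuming the kernel truncates to whole pages.
    return free_min + (free_target - free_min) // 2

print(page_free_optimal(20, 512))  # 266 pages
```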



  * vm_page_free_reserved
      The threshold value that determines when memory is limited to
      privileged tasks.	 When the number of pages on the free page list	falls
      below this value,	only privileged	tasks can get memory.

      Default value: 10	(pages)

      Minimum value: 1

      Maximum value: 2,147,483,647



  * vm_page_free_swap
      The threshold value that begins swapping of idle tasks. When the number
      of pages on the free page	list falls below this value, idle task swap-
      ping begins.

      Default value: Automatically scaled by using this	formula:


	   vm_page_free_min + ((vm_page_free_target - vm_page_free_min)	/ 2)

      Minimum value: 0

      Maximum value: 2,147,483,647



  * vm_page_free_target
      The threshold value that stops paging. When the number of pages on the
      free page	list reaches this value, paging	stops.

      Default value: Based on the amount of managed memory that	is available
      on the system, as	shown in the following table:


      __________________________________________________
      Available	Memory (M)   vm_page_free_target (pages)
      __________________________________________________
      Less than	512	     128
      512 to 1023	     256
      1024 to 2047	     512
      2048 to 4095	     768
      4096 and higher	     1024
      __________________________________________________

      Minimum value: 0 (pages)

      Maximum value: 2,147,483,647
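      The table above can be expressed as a simple lookup (Python, for
      illustration only; no such function exists in the operating system):

```python
def page_free_target(mem_mb: int) -> int:
    """Default vm_page_free_target (in pages) for a given amount of
    managed memory (in MB), per the table above."""
    if mem_mb < 512:
        return 128
    if mem_mb < 1024:
        return 256
    if mem_mb < 2048:
        return 512
    if mem_mb < 4096:
        return 768
    return 1024
```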



  * vm_page_prewrite_target
      Maximum number of	modified UBC pages that	the vm subsystem will
      prewrite to disk if it anticipates running out of	memory.	The prewrit-
      ten pages	are the	least recently used (LRU) pages.

      Default value: vm_page_free_target * 2

      Minimum value: 0

      Maximum value: 2,147,483,647



  * vm_rss_block_target
      A threshold number of free pages that starts swapping of anonymous
      memory from the resident set of a process. Paging of anonymous memory
      starts when the number of free pages falls below this value. The
      process is blocked until the number of free pages reaches the value set
      by the vm_rss_wakeup_target attribute.

      Default value: Same as vm_page_free_optimal

      Minimum value: 0

      Maximum value: 2,147,483,647

      The default value	of the vm_rss_block_target attribute is	the same as
      the default value	of the vm_page_free_optimal attribute that controls
      the threshold value for hard swapping.

      You can increase the value of vm_rss_block_target	to start paging	of
      anonymous	memory earlier than when hard swapping occurs or decrease the
      value to delay paging of anonymous memory	beyond the point at which
      hard swapping occurs.



  * vm_rss_max_percent
      A	percentage of the total	pages of anonymous memory on the system	that
      is the system-wide limit on the resident set size	for any	process. The
      value of this attribute has an effect only if anon_rss_enforce is	set
      to 1 or 2.

      Default value: 100 (percent)

      Minimum value: 1

      Maximum value: 100

      You can decrease this percentage to enforce a system-wide	limit on the
      resident set size	for any	process. Be aware, however, that this limit
      applies to privileged, as	well as	unprivileged, processes	and will
      override a larger	resident set size that may be specified	for a process
      through the setrlimit() call.



  * vm_rss_wakeup_target
      A	threshold number of free pages that will unblock a process whose
      anonymous	memory is swapped out. The process is unblocked	when the
      number of	free pages meets this value.

      Default value: Same as vm_page_free_optimal

      Minimum value: 0

      Maximum value: 2,147,483,647

      The default value	of the vm_rss_wakeup_target attribute is the same as
      the default value	of the vm_page_free_optimal attribute that controls
      the threshold value for hard swapping.

      You can increase the value of vm_rss_wakeup_target to free more memory
      before unblocking	a process or decrease the value	to unblock the pro-
      cess sooner (with	less freed memory).



  vm_segment_cache_max
      Number of	text segments that can be cached in the	segment	cache.
      (Applies only if you enable segmentation.)

      Default value:  50 (segments)

      Minimum value: 0

      Maximum value: 8192

      The vm subsystem uses the	segment	cache to cache inactive	executables
      and shared libraries.  Because objects in	the  segment cache can be
      accessed by mapping a page table entry, this cache eliminates I/O
      delays for repeated executions and reloads.

      Reducing the number of segments in the segment cache can free memory
      and help to reduce paging	overhead. (The size of each segment depends
      on the text size of the executable or the	shared library that is being
      cached.)



  * vm_segmentation
      A	value that enables (1) or disables (0) the ability of shared regions
      of user address space to also share the page tables that map to those
      shared regions.

      Default value: 1 (enabled)

      In a TruCluster environment, this	value must be the same on all cluster
      members.



  vm_swap_eager
      Specifies	the swap allocation mode, which	can be immediate mode (1) or
      deferred mode (0).

      Default value: 1 (immediate mode)



  * vm_syncswapbuffers
      The number of synchronous	I/O requests that can be outstanding to	the
      swap partitions at one time. Synchronous swap requests are used for
      page-in operations and task swapping.

      Default value: 128 (requests)

      Minimum value: 1

      Maximum value: 2,147,483,647



  vm_syswiredpercent
      Maximum percentage of physical memory that can be	dynamically wired.
      The kernel and user processes use	this memory for	dynamically allocated
      data structures and address space, respectively.

      Default value: 80	(percent)

      Minimum value: 1

      Maximum value: 100



  * vm_troll_percent
      Enables, disables, and tunes the trolling	rate for the memory troller
      on systems supported by the memory troller.

      When enabled, the	memory troller continually reads the system's memory
      to proactively discover and handle memory	errors.	 The troll rate	is
      expressed	as a percentage	of the system's	total memory trolled per hour
      and you can change it at any time. Valid troll rate settings are:

	+  Default value: 4 percent per	hour

	   This	default	value applies if you do	not specify any	value for
	   vm_troll_percent in the /etc/sysconfigtab.  At the default troll
	   rate, each 8-kilobyte memory page is trolled approximately once
	   a day.

	+  Disable value: 0 (zero)

	   Specify a value of 0	(zero)	to disable memory trolling.

	+  Range: 1 - 100 percent

	   Specify a value in the range	1 to 100 to set	the troll rate to a
	   percentage of memory	to troll per hour. For example,	a troll	rate
	   of 50 reads half the	total memory in	one hour. After	all memory is
	   read, the troller starts a new pass at the beginning	of memory.

	+  Accelerated trolling: 101 percent

	   Specify a value greater than	100 percent to invoke one pass
	   accelerated trolling. At this rate, all system memory is trolled
	   at a	rate of	approximately 6000 pages per second, where one page
	   equals 8 kilobytes. Trolling	is then	automatically disabled after
	   a single pass. This mode is intended	for trolling all memory
	   quickly during off peak hours.

      Low troll	rates, such as the 4 percent default, have a negligible
      impact on	system performance. Processor usage for	memory trolling
      increases	as the troll rate is increased.	Refer to memory_trolling(5)
      for additional performance information and memory	troller	usage
      instructions.
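      The relationship between the troll rate and the length of one full
      pass over memory can be sketched as follows (Python, for illustration
      only):

```python
def hours_per_pass(troll_percent: int) -> float:
    """Hours for one complete pass over memory when troll_percent
    of total memory is trolled each hour."""
    return 100.0 / troll_percent

print(hours_per_pass(50))  # 2.0 hours, as in the example above
print(hours_per_pass(4))   # 25.0 hours at the default rate
```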



  vm_ubcbuffers
      Specifies	the number of I/O operations that can be outstanding while
      purging dirty (modified) pages from the UBC. The dirty pages are
      flushed to disk to reclaim memory.  The UBC purge	daemon will stop
      flushing dirty pages when	the number of I/Os reaches the vm_ubcbuffers
      limit or there are no more dirty pages in	the UBC. AdvFS software	does
      not use this attribute; only UFS software	uses it.

      Default value: 256 (I/Os)

      Minimum value: 0

      Maximum value: 2,147,483,647

      For systems running at capacity and on which many	interactive users are
      performing write operations to UFS file systems, users might detect
      slowed response times if many pages are flushed to disk each time	the
      UBC buffers are purged. Decreasing the value of vm_ubcbuffers causes
      shorter but more frequent	purge operations, thereby smoothing out	sys-
      tem response times. Do not, however, decrease vm_ubcbuffers to a value
      that completely disables purging of dirty	pages. One I/O for certain
      file systems might be associated with many pages because of write	clus-
      tering of	dirty pages.


				       Note

	 Changes to this attribute take effect only when made at boot time.

	 You can also set the smoothsync_age attribute of the vfs kernel sub-
	 system	to address response-time delays	that can occur during periods
	 of intense write activity. The	smoothsync_age attribute uses a	dif-
	 ferent	metric (age of dirty pages rather than number of I/Os) to
	 balance the frequency and duration of purge operations and
	 therefore does	not support the	ability	of UFS to flush	all dirty
	 pages for the same write operation at the same	time. However,
	 smoothsync_age	can be changed while the system	is running and is
	 used by AdvFS as well as UFS software.	See sys_attrs_vfs(5) for
	 information about the smoothsync_age attribute.





  * vm_ubcdirtypercent
      The percentage of	pages that must	be dirty (modified) before the UBC
      starts writing them to disk.

      Default value: 10	(percent)

      Minimum value: 0

      Maximum value: 100



  * vm_ubcfilemaxdirtypages
      In the context of	an application thread, the number of pages that	must
      be dirty (modified) before the UBC update	daemon starts writing them.
      This value is for	internal use only.



  vm_ubcpagesteal
      The minimum number of pages to be	available for file expansion. When
      the number of available pages falls below	this number, the UBC steals
      additional pages to anticipate the file's	expansion demands.

      Default value: 24	(file pages)

      Minimum value: 0

      Maximum value: 2,147,483,647



  * vm_ubcseqpercent
      The maximum percentage of	UBC memory that	can be used to cache a single
      file. See	vm_ubcseqstartpercent for information about controlling	when
      the UBC checks this limit.

      Default value: 10	(percent)

      Minimum value: 0

      Maximum value: 100



  * vm_ubcseqstartpercent
      A	threshold value	(a percentage of the UBC in terms of its current
      size) that determines when the UBC starts	to check the percentage	of
      UBC pages	cached for each	file object. If	the cached page	percentage
      for any file exceeds the value of	vm_ubcseqpercent, the UBC returns
      that file's UBC LRU pages	to virtual memory.

      Default value: 50	(percent)

      Minimum value: 0

      Maximum value: 100


SEE ALSO

  Commands: dxkerneltuner(8), sysconfig(8), and	sysconfigdb(8).


  Others: memory_trolling(5), sys_attrs_proc(5), and sys_attrs(5).

  System Configuration and Tuning

  System Administration