Vista uses a lot of memory
Those services are organized into logical groups, and then a single svchost.exe instance is created for each group. For instance, one svchost.exe instance runs the services related to the firewall, while another svchost.exe instance might run all the services related to the user interface. Additionally, if you are noticing very heavy CPU usage on a single svchost.exe process, you can track down and restart the offending service instead of rebooting the whole machine. The biggest problem is identifying what services are being run on a particular svchost.exe instance.

If you want to see what services are being hosted by a particular svchost.exe instance, use Task Manager. You can right-click on a particular svchost.exe process and choose "Go to Service(s)"; this will flip over to the Services tab, where the services running under that svchost.exe process are highlighted. Or you can double-click on a svchost.exe process in Process Explorer and look at the Services tab of its properties. Alternatively, open up Services from the Administrative Tools section of Control Panel, or type services.msc into the Start menu search box.
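The same mapping can also be pulled from the command line with `tasklist /svc`. A minimal cross-platform sketch (the filter string is an assumption about the exact `tasklist` invocation, and the function name is made up for illustration):

```python
import platform
import subprocess


def svchost_services():
    """Return the raw `tasklist /svc` output for svchost.exe processes,
    which lists every service hosted by each instance.

    tasklist is a Windows-only tool, so on other platforms we return None.
    """
    if platform.system() != "Windows":
        return None
    result = subprocess.run(
        ["tasklist", "/svc", "/fi", "imagename eq svchost.exe"],
        capture_output=True,
        text=True,
    )
    return result.stdout


print(svchost_services())
```

On a Vista box the output groups service names under each svchost.exe PID, which answers the "which services live in this instance" question directly from a script.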

The kernel does have a hard limit on virtual address space, which comes into play on 32-bit systems or on 64-bit systems running a 32-bit OS.

This limit must be set no later than boot time (usually it is set at install time) and depends on how much physical memory you have installed on your system.

These techniques are not new; there are classic OS textbooks that describe these mechanisms in great detail. The fact of the matter is that physical memory is a resource. Even on systems with pervasive support for pageable kernel space (e.g., AIX), some portions of physical memory may never be paged to disk.

But more importantly, the system must make the best use of its resources on all levels of the storage hierarchy. For example, local disk storage should be used as a cache for network resources such as NFS volumes. If the NFS volume is large enough, the cachefs volume should expand to fill your unused and unreserved space. Obviously some space will need to remain reserved for paging, for dumping the kernel (if so configured), and for emergency maintenance by the superuser.

Wow, did you totally miss the point. That is what I was referring to: the actual process that minimizes the use of the disk cache. But in order to understand the comments I made on the article, you actually have to read the article. The very fact that you had to insert the words "disk cache" to go off on a tangent about a sentence you took out of context shows me you had no intention of actually arguing the point of the article.

In addition, memory is not a cache for paging space, so any attempt to associate SuperFetch with using memory as a disk cache is simply wrong.

On the contrary, it is a virtual memory mechanism. It performs page-ins from the paging space before the virtual pages are referenced based on previously observed trends, replacing least-recently-used pages in memory with hopefully soon-to-be-used pages from paging space. In the best case scenario, it performs paging earlier than it would have otherwise occurred. In anything but the best case, it generates more paging operations than necessary.
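The mechanism described above, speculatively paging in predicted pages while evicting the least-recently-used ones, can be illustrated with a toy model. This is a sketch under assumptions: the page numbers, the fixed frame pool, and the class name are all invented for illustration, not taken from any real SuperFetch internals.

```python
from collections import OrderedDict


class PrefetchingMemory:
    """Toy model: a fixed pool of page frames managed LRU, into which a
    SuperFetch-like policy pre-loads pages it expects to be used soon."""

    def __init__(self, frames):
        self.frames = frames
        self.resident = OrderedDict()  # page -> True, in LRU order
        self.page_ins = 0              # count of paging operations

    def _load(self, page):
        if page in self.resident:
            self.resident.move_to_end(page)   # refresh recency on a hit
            return
        if len(self.resident) >= self.frames:
            self.resident.popitem(last=False)  # evict least recently used
        self.resident[page] = True
        self.page_ins += 1                     # one page-in from backing store

    def reference(self, page):
        self._load(page)   # demand page-in if the page is not resident

    def prefetch(self, pages):
        for page in pages:
            self._load(page)  # speculative page-in before any reference


mem = PrefetchingMemory(frames=3)
mem.prefetch([1, 2, 3])   # predicted working set: 3 page-ins up front
mem.reference(1)          # hit: no extra paging
mem.reference(2)          # hit
print(mem.page_ins)       # 3: the prefetch just did the paging early
mem.prefetch([4])         # wrong guess evicts page 3 (the LRU page)
mem.reference(3)          # now a miss: an extra page-in
print(mem.page_ins)       # 5
```

The two outcomes match the comment's point: when the prediction is right, the prefetch merely performs paging earlier than it would otherwise have occurred; when it is wrong, it generates more paging operations than necessary.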

Anyone can argue the point of the article, but not that many people actually have the knowledge to decide for themselves whether the whole premise of the article is full of crap. Memory is not a cache for paging space! When filesystem data gets cached, it persists on the backing store.

The whole premise of the article is wrong, so why should I argue it? This article is not about disk caching. Read the article; my comments were in context with the article. This is not a social forum, and the comments are supposed to be focused on the article above. I read the article after you replied to my first comment. A troll has his mind made up on how the world works and offers conclusions without justification. You, on the other hand, look like a troll.

If I was talking about disk caching, your topic would have totally made sense. But my reply was based on the article, which dealt with SuperFetch. I was disagreeing with that post, because Microsoft pours money by the fistful into research organizations in order to keep computer science advancing along and to give value to their product. You may disagree with how they give value to their product, but they do it by funding research organizations and by buying other organizations that are at the top of their science where Microsoft's own software is lacking.

But the reason Linux gets to the goal first is the volatile nature of the kernel, where anybody can patch it and post it to the internet. That holds without even getting into the merits of closed source vs. open source development.

Hmm, drawing a blank. KDE is actually really using all that memory; it has nothing to do with caching. So, suppose I have 16 GB of memory. Will Vista eat all of it just for itself? No, because the cache is thrown away instantly when you need the memory for a process. Cache is free, really: it costs a small amount of CPU to track the cache and a small amount of memory (which shrinks as you have less cache), and in exchange you gain faster disk accesses. Although there are still short lock-ups when some program suddenly starts using a lot of memory, they are at least an order of magnitude shorter than before.

You can make a swap file in one of the regular partitions, or you can set some RAM space aside for swap, which is a really neat trick if you have the RAM to spare. Either way you need it, because the RAM allocation system needs a place to put dirty pages, and that place needs to be outside the normal allocation space.

Plus, you can tweak the swappiness setting, which means that the swap can potentially end up being used exclusively for dirty pages and never for swapping out idle applications. I think a few megs of swap in RAM with swappiness set to the minimum will give you the ideal combination, not getting rid of swap.
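On a Linux box the setting mentioned above lives in `/proc/sys/vm/swappiness` and can be changed through sysctl. A minimal sketch; the value 10 below is just an example, not a recommendation from this thread:

```shell
# Current swappiness: low values make the kernel prefer dropping page
# cache, high values make it prefer swapping out idle anonymous pages.
# The default on most distributions is 60.
cat /proc/sys/vm/swappiness || echo "swappiness not exposed here"

# Change it for the running system (needs root):
# sysctl -w vm.swappiness=10

# Persist across reboots by adding this line to /etc/sysctl.conf:
# vm.swappiness=10
```

Creating the swap file itself is the usual root-only sequence (commented here for the same reason): `fallocate -l 1G /swapfile`, `chmod 600 /swapfile`, `mkswap /swapfile`, `swapon /swapfile`.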

Most pro-swap comments think that absolute throughput is the only thing that counts. On the desktop I care about latencies, about responsiveness.

Not under any circumstances. Actually, since I switched from 2.4 to 2.6 it helped, but very little; not nearly enough. If lower latency is your goal, reduce the niceness of X and your window manager. I renice X and my window manager to negative values to improve desktop responsiveness. If you compile your kernel, you can also raise the timer frequency (HZ). There are other options in the 2.6 kernel that help as well. You must have some configuration option incorrect, because you should NOT be seeing skipping and unresponsiveness while doing a cp.
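The renice step can look like the sketch below. The niceness values and the `Xorg`/`kwin` process names are assumptions for illustration; raising priority (negative niceness) requires root, so those lines are commented:

```shell
# Raise the priority of the X server and window manager (root required
# for negative niceness; process names are examples):
# renice -n -10 -p "$(pidof Xorg)"
# renice -n -5  -p "$(pidof kwin)"

# Lowering priority needs no privileges; for instance, demote the
# current shell to niceness 5:
renice -n 5 -p $$
```

The effect is that the scheduler favors the display stack over batch work like a big cp, which is exactly the latency-over-throughput trade-off being argued here.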

In fact, I was impressed that I could do things like three emerges at a time (Gentoo) while browsing the web and listening to music, with no noticeable loss of performance. In that case all the millions of Ubuntu desktop users running with default settings have their configuration options incorrect, too. It is designed very specifically to help interactivity and GUI responsiveness.

Sounds good. I had a similar problem too, and it turned out DMA was disabled. I agree, but they do. Besides, I think SATA drives always use DMA. My 3 other disks perform similarly. And yes, I tried various experimental pre-alpha patches on various distributions, and there was nothing wrong with my hardware or DMA settings.

Eventually, the drivers got better and the problem went away. This reasoning is a little simplistic: they should both be evicted from physical memory on the basis of how recently they were last referenced. This comment shows quite clearly that you think throughput is above everything else, as do Torvalds and Morton.

I do like that I can fiddle with a lot in Linux, but I hate that I have to. Unpredictable process starvation is one reason why I stopped using Windows. One of the primary design considerations for the 2.6 kernel was desktop interactivity. Some server admins still use 2.4. No need to recompile the kernel.

All of these should be configured as modules in your Ubuntu kernel.

Rick Rogers: Hi, actually, that sounds about right in a normal installation. You can use a tool like Process Explorer (link below) to see what's actually being run, but again, what you are seeing is normal in a Vista installation.
