Hello,
Thunar is using 100% of the CPU.
I have been using XFCE for more than 10 years, mostly with Debian, but lately with Sparky (bleeding edge) and openSUSE (15.3, LTS).
The problem remains in every one of these distros.
I've tried the usual solutions (disabling thumbnails, etc.), but the problem still arises multiple times every day:
- I have a large disk with photos and videos of my kids
- and when I open a directory of that disk with Thunar
- after a few seconds or minutes
- the CPU goes to 100%
- and then I have to kill Thunar
- and open the directory again
- and the same thing happens
- I am tired of this
- and am seriously thinking about changing from XFCE to another DE
- but I like XFCE a lot
- so I am avoiding this move
Has somebody solved this problem for good?
Is it the thunar process that goes to 100%? Or maybe the tumblerd process?
Also, is there anything output to your ~/.xsession-errors file when this happens?
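If it helps, a rough sketch of how to check both at once (process names and intervals are just examples):
# watch per-process CPU for thunar and tumblerd while reproducing
top -b -d 2 | grep --line-buffered -i -e thunar -e tumbler
# in a second terminal, watch for new errors as they arrive
tail -f ~/.xsession-errors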
I'll bet it's tumblerd. For some years now I've had to keep a launcher on my panel that runs "killall tumblerd".
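The launcher just needs a .desktop entry along these lines (a minimal sketch; the name and icon are arbitrary):
[Desktop Entry]
Type=Application
Name=Kill tumblerd
Exec=killall tumblerd
Icon=process-stop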
Hi,
Since it's the weekend, yesterday I spent some hours running many tests, and I found this:
It's not tumblerd, as you can see from these two screenshots:
https://imgbox.com/EITwUW0Y
https://imgbox.com/jz9wcVBM
I believe the problem is the plugin Directory Menu, but only when certain conditions are present.
I have three panels:
1 - horizontal panel with window buttons, clock, etc.
2 - vertical panel with many shortcuts and many Directory Menu entries
3 - a second vertical panel also with many shortcuts and many Directory Menu entries
Conditions for Thunar's high CPU usage to occur:
1 - with Thunar (whether using Directory Menu or not), enter a directory without write permissions (for example, a directory other than Desktop or the home folder, like /etc or /opt).
2 - then, with the plugin Directory Menu, enter a directory inside /my-data-partition/folder-1, for example /my-data-partition/folder-1/folder-2.
3 - note: /my-data-partition is another ext4 partition (a data partition) on my operating-system disk.
4 - if I enter /my-data-partition/folder-1/folder-2 without using the plugin Directory Menu, the CPU doesn't spike and stays normal.
5 - if I navigate away from /my-data-partition/folder-1/folder-2 (even within the same Thunar window), for example to /my-data-partition/folder-1, the CPU drops from high utilization back to normal.
6 - curiously, if I first open a directory inside /my-data-partition/folder-1/folder-2 (whether or not using a Directory Menu entry in the panel) and then open it again (again, with or without the plugin Directory Menu), the CPU doesn't spike and stays normal.
7 - /my-data-partition/folder-1 has 777 permissions.
8 - curiously, this doesn't happen in other directories such as /my-data-partition/folder-3, /my-data-partition/folder-3/folder-4, /my-data-partition/folder-5 or /my-data-partition/folder-5/folder-6; it only happens in /my-data-partition/folder-1/folder-2. I couldn't find out why, but maybe it's because it contains many files (some subfolders hold big video files and many photos), although, as I said above, it has 777 permissions. A quick way to watch which process spikes is sketched below.
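This is roughly how I watch it while reproducing the steps above (just a sketch; assumes the process name is "Thunar"):
# print Thunar's PID and CPU usage once a second
while sleep 1; do ps -C Thunar -o pid=,%cpu=; done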
I don't know if this is related to this bug, but I am using Thunar 1.8.15 and it was marked as solved in versions 1.6.16 and 1.8.3:
https://bugzilla.xfce.org/show_bug.cgi?id=14900
Point 4 above gave me a dirty workaround:
I just keep a Thunar window open (whether using Directory Menu or not) in /my-data-partition/folder-1/folder-2 in another workspace, and this prevents the CPU from spiking because of Thunar.
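If anyone wants to automate that workaround, an autostart entry could open the window at login (a hypothetical sketch; save as ~/.config/autostart/thunar-workaround.desktop and adjust the path):
[Desktop Entry]
Type=Application
Name=Thunar CPU workaround
# opens the problem directory once at login so the CPU never spikes
Exec=thunar /my-data-partition/folder-1/folder-2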
This happened to me with Debian, CentOS, Sparky Linux, and now with openSUSE, all with XFCE and Directory Menu (although I ran these tests only on openSUSE 15.3 with Thunar 1.8.15).
So, I have two questions:
A - can somebody fix Directory Menu?
B - is there a Directory Menu alternative?
jack_the_pirate wrote: it only happens in /my-data-partition/folder-1/folder-2. I couldn't find out why, but maybe it's because it contains many files (some subfolders hold big video files and many photos), although, as I said above, it has 777 permissions.
This is why I think it's related to tumblerd/thumbnailing.
I can't seem to replicate this, but I don't have a directory with a large number of images. If you can replicate it, go through it again, and when the CPU spikes, run the following command in a terminal window:
pkill tumblerd
...and note whether there is an immediate drop in Thunar CPU usage.
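A quick before/after check could look like this (a sketch; assumes the process name is "Thunar"):
# CPU before, kill the thumbnailer, wait, CPU after
ps -C Thunar -o %cpu=; pkill tumblerd; sleep 3; ps -C Thunar -o %cpu=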
Hi,
It can't be tumblerd, because I uninstalled it some months ago.
If you take a look at my previous answer and also at these screenshots, you will see what is happening.
1 - just opening System Monitor
https://imgbox.com/bAcCGq1Z
2 - running "which tumblerd" returns nothing, because I uninstalled tumblerd some months ago
https://imgbox.com/FkVgBRmx
3 - open directory /etc
https://imgbox.com/XNgkr73d
4 - open a directory inside /my-data-partition/folder-1 (/my-data-partition/folder-1/various/)
https://imgbox.com/6CzErwui
5 - running "pkill tumblerd" naturally gives no output
https://imgbox.com/gUSIINkU
6 - the CPU stays very high
https://imgbox.com/STKgHoPE
7 - System Monitor ordered by CPU percentage usage shows us that Thunar is the process causing the high CPU
https://imgbox.com/Aju6gRKP
8 - System Monitor ordered by Process Name shows us all the running processes (Thunar is high, and there is no tumblerd)
https://imgbox.com/3qo9PFB5
9 - System Monitor ordered by Process Name page 2
https://imgbox.com/mN6svVdy
jack_the_pirate wrote: It can't be tumblerd, because I uninstalled it some months ago.
In that case, it can't be tumblerd. There is also this bug report which looks possibly related.
There are also some odd causes from hardware and/or source-file corruption. This can be hard to determine, but Thunar could be waiting on I/O thrash. With kernels newer than roughly 5.2 you can watch "pressure" at /proc/pressure/io. Make a genmon item or something to cat the file; under normal operation the averages should be zero.
Otherwise, I have had MPEG files (DVB recordings) hang things and usually time out. This shows up in this 'io' metric.
$ cat /proc/pressure/io
some avg10=0.00 avg60=0.00 avg300=0.00 total=64205479
full avg10=0.00 avg60=0.00 avg300=0.00 total=52643890
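A genmon command for that can be as simple as this (a sketch; assumes a PSI-enabled kernel exposing /proc/pressure/io):
# print the instantaneous "some" io pressure for an xfce4-genmon-plugin item
awk 'NR==1 { sub(/avg10=/, "", $2); print "io " $2 }' /proc/pressure/io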
I had a similar problem with Thunar 4.15. I always have a lot of Thunar windows open, some directories have a lot of files inside, and I also use symlinks to directories (I even have directory A, which contains a symlink to directory B, which contains a symlink to directory A). BTW: I have a script which goes through all Thunar windows and saves their opened directory, window geometry and workspace into a configuration file, and can then open all the windows again.
I noticed that Thunar (with its companion process gvfsd-metadata) used 30-60% of the CPU, and it happened quite often right after a reboot (once all its windows were opened), sometimes once or twice a day.
The usual remedy was killing Thunar and starting it again.
Sometimes when I restarted Thunar with the same windows, the problem came back immediately, so I assumed it is somehow related to my directories (and thus not easy to replicate elsewhere, which is why I did not ask for help here).
I searched the Internet for help (and followed advice like "kill gvfs-metadata, remove some related directory") and nothing helped.
Out of despair I updated Thunar from this PPA (and then removed the PPA): https://launchpad.net/~xubuntu-dev/+arc … perimental
sudo add-apt-repository ppa:xubuntu-dev/experimental
sudo apt install thunar
sudo add-apt-repository --remove ppa:xubuntu-dev/experimental
and after a couple of days it seems to be OK. This new Thunar has its own peculiarities, but the annoying CPU consumption has stopped.
Thank you!
By the way, can you share your script here?
jack_the_pirate wrote: Thank you! By the way, can you share your script here?
You are welcome, I am glad that it helped.
Well, my scripts are not well written, but they help me a lot. You can try them, rewrite them, or just use them as inspiration for your own scripts. They are quite long, so I had to upload them to an external server; you can download them here:
thunar_folders - main script,
mouse_switch - companion script, used to switch the mouse off (and back on) temporarily while thunar_folders operates on Thunar windows (i.e., while it is saving the current windows).
Those scripts use a lot of command-line tools (wmctrl, xdotool, zenity, xinput), so it is highly unlikely they will work out of the box. Please check the variable settings at the beginning of each script (they need to be configured).
I am sure someone else could write better scripts, more efficient and elegant; I am still learning...
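For anyone curious about the basic idea, listing Thunar windows with their workspace and geometry boils down to something like this (a bare sketch, far from what the full scripts do):
# wmctrl -lxG columns: id, desktop, x, y, w, h, WM_CLASS, host, title
wmctrl -lxG | awk '$7 ~ /[Tt]hunar/ { print "workspace=" $2, "x=" $3, "y=" $4, "w=" $5, "h=" $6 }'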
Thank you!
And don't worry about the scripts not being perfect.
It's better than nothing.
I have lots of scripts I've created myself, and I believe most of them have mistakes or could be better.
Join the club
Thank you.
If you have any questions or notes on what to improve, please let me know.
Which kind of disk is it?
I suspect the culprit might be FUSE...
Which kind of disk is it?
I suspect the culprit might be FUSE...
I am not sure if I understand your question...
From /etc/fstab:
I use one big SSD with ext4 partitions and three HDDs with ext4 partitions:
UUID=9f6fe56b-c553-4759-9caf-133fd42f46 /mnt/XXXX ext4 defaults,x-gvfs-hide,comment=gvfs-hide 0 0
plus a couple of bind mounts like this:
/mnt/XXXX/Apps /media/Users/Apps none bind,x-gvfs-hide,comment=gvfs-hide 0 0
Which kind of disk is it?
It's a 3 TB, 3.5" hard drive.
I suspect the culprit might be FUSE...
Could you elaborate on this and possible solutions?
Sorry guys, I got confused: I was convinced those drives were external USB ones, hence I thought FUSE might be the issue. Every time I use FUSE to mount a Samba folder or open an external drive, the CPU explodes because of FUSE...
Anyway, I use mc when I have to copy/move a lot of files or big files...
I have the same problem. In my experience, it often happens when viewing a folder that lists files which are changing: for example an ongoing download, or a media file that is still growing (ffmpeg output). I think Thunar tries to get the MIME type of such a file and fails while doing so, maybe a race condition. But it could also be caused by a flood of inotify events. Copying a large file into a folder can also trigger it.
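If someone wants to test the inotify theory, watching the folder while a file is growing should show the event storm (a sketch; inotifywait comes from the inotify-tools package, and the path is a placeholder):
# print every create/modify/close_write event in the suspect folder
inotifywait -m -e create -e modify -e close_write /path/to/the/suspect/folder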
There is a long-standing bug report about this with some recent-ish work.
Welcome to the forums.
I have the same problem. In my experience, it often happens when viewing a folder that lists files which are changing [...] I think Thunar tries to get the MIME type of such a file and fails while doing so, maybe a race condition.
I'm firmly on the side that this is subsystem-dependent, i.e., slow computers.
Even my very old Thunar 1.6.11 has no issue handling simultaneous input streams (up to four) while dumping gigabytes to or from disk with drag-and-drop or CA-initiated scp's, etc. CPU usage stays below 10%. This is without a thumbnailer.
On newer systems with newer Thunars and with thumbnailers there is some extra CPU activity, but still within 10%. One big box uses a 64 GB tmpfs where 10-15 video streams may be active, and an avidemux save still happens at 30-50k fps. I see 1-2 second pauses in the interface every so often, never actually interrupting anything.
top - 01:36:10 up 3:17, 1 user, load average: 11.59, 10.50, 10.18
Tasks: 468 total, 4 running, 464 sleeping, 0 stopped, 0 zombie
%Cpu(s): 52.9 us, 25.0 sy, 0.0 ni, 19.1 id, 2.9 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 31979.8 total, 246.1 free, 5651.8 used, 26081.9 buff/cache
MiB Swap: 4096.0 total, 1068.5 free, 3027.5 used. 25074.1 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3328 daniel 20 0 2797600 507680 26364 R 462.5 1.6 413:17.56 Thunar
10570 daniel 20 0 182216 22388 6132 R 75.0 0.1 67:44.86 gvfsd-metadata
This is what it regularly looks like on my machine. It starts when opening a folder that contains image, video or audio files, or files that change regularly, like downloads still in progress. This might also be one trigger: "broken" or incomplete files where something goes wrong while fetching metadata. Sometimes the files are also big, multiple gigabytes in size.
As soon as the CPU consumption starts, even simple file operations like moving or deleting a single file get queued and are sometimes executed hours later, or never.
My system has one NFS mount (NFSv4); all other filesystems are ext4, and the rootfs is btrfs. The NFS folder contains a lot of videos, so it is often the trigger, but not always.
$ ls -l /proc/10570/fd/ /proc/3328/fd/
/proc/10570/fd/:
total 0
lr-x------ 1 user user 64 Jun 2 01:17 0 -> /dev/null
lrwx------ 1 user user 64 Jun 2 01:17 1 -> 'socket:[61199]'
lr-x------ 1 user user 64 Jun 2 01:17 10 -> /home/user/.local/share/gvfs-metadata/home
lrwx------ 1 user user 64 Jun 2 01:17 11 -> /home/user/.local/share/gvfs-metadata/home-20a72a89.log
lrwx------ 1 user user 64 Jun 2 01:17 2 -> 'socket:[61199]'
lrwx------ 1 user user 64 Jun 1 22:57 3 -> 'anon_inode:[eventfd]'
lrwx------ 1 user user 64 Jun 2 01:17 4 -> 'anon_inode:[eventfd]'
lrwx------ 1 user user 64 Jun 2 01:17 5 -> 'socket:[61202]'
lrwx------ 1 user user 64 Jun 2 01:17 6 -> 'anon_inode:[eventfd]'
lrwx------ 1 user user 64 Jun 2 01:17 7 -> 'socket:[61203]'
lr-x------ 1 user user 64 Jun 2 01:17 8 -> /home/user/.local/share/gvfs-metadata/uuid-e75257c2-48b6-4a27-b958-648f4ebd090e
lrwx------ 1 user user 64 Jun 2 01:17 9 -> /home/user/.local/share/gvfs-metadata/uuid-e75257c2-48b6-4a27-b958-648f4ebd090e-eceea02b.log
/proc/3328/fd/:
total 0
lr-x------ 1 user user 64 Jun 2 01:18 0 -> /dev/null
l-wx------ 1 user user 64 Jun 2 01:18 1 -> /dev/null
lrwx------ 1 user user 64 Jun 2 01:18 10 -> 'anon_inode:[eventfd]'
lrwx------ 1 user user 64 Jun 2 01:18 11 -> 'socket:[36289]'
lr-x------ 1 user user 64 Jun 2 01:18 12 -> /home/user/.local/share/gvfs-metadata/uuid-7a7e6033-b237-44ab-ba74-5a01fc4539b8
lr-x------ 1 user user 64 Jun 2 01:18 13 -> /proc/3328/mountinfo
lr-x------ 1 user user 64 Jun 2 01:18 14 -> /home/user/.local/share/gvfs-metadata/uuid-e75257c2-48b6-4a27-b958-648f4ebd090e
lr-x------ 1 user user 64 Jun 2 01:18 15 -> /home/user/.local/share/gvfs-metadata/uuid-e75257c2-48b6-4a27-b958-648f4ebd090e-eceea02b.log
lrwx------ 1 user user 64 Jun 2 01:18 16 -> 'socket:[60118]'
lr-x------ 1 user user 64 Jun 2 01:18 17 -> /home/user/.local/share/gvfs-metadata/computer:
lr-x------ 1 user user 64 Jun 2 01:18 18 -> /home/user/.local/share/gvfs-metadata/computer:-4044a9df.log
lr-x------ 1 user user 64 Jun 2 01:19 19 -> /daten/a/b/c
l-wx------ 1 user user 64 Jun 2 01:18 2 -> /home/user/.xsession-errors
lrwx------ 1 user user 64 Jun 1 23:53 20 -> 'socket:[181814]'
lrwx------ 1 user user 64 Jun 2 01:18 21 -> 'socket:[60119]'
lr-x------ 1 user user 64 Jun 2 01:18 22 -> /home/user/.local/share/gvfs-metadata/trash:
lr-x------ 1 user user 64 Jun 2 01:18 23 -> /home/user/.local/share/gvfs-metadata/trash:-309f76b9.log
lr-x------ 1 user user 64 Jun 2 01:18 24 -> /home/user/.local/share/gvfs-metadata/root
lr-x------ 1 user user 64 Jun 2 01:18 25 -> /home/user/.local/share/gvfs-metadata/root-cfbb2f53.log
lrwx------ 1 user user 64 Jun 2 01:18 26 -> 'socket:[60123]'
lr-x------ 1 user user 64 Jun 2 01:18 27 -> /home/user/.local/share/gvfs-metadata/network:
lr-x------ 1 user user 64 Jun 2 01:18 28 -> /home/user/.local/share/gvfs-metadata/network:-0a1d78b4.log
lr-x------ 1 user user 64 Jun 2 01:18 29 -> anon_inode:inotify
lrwx------ 1 user user 64 Jun 2 01:18 3 -> 'anon_inode:[eventfd]'
lr-x------ 1 user user 64 Jun 2 01:18 30 -> /home/user/.local/share/gvfs-metadata/uuid-c0a69317-16e4-43d1-b2ab-c1d13f6240b7
lr-x------ 1 user user 64 Jun 2 01:18 31 -> /home/user/.local/share/gvfs-metadata/uuid-cb9587aa-8097-4b45-8bbb-ae48a3e9f2b7
lrwx------ 1 user user 64 Jun 2 01:18 32 -> 'socket:[70701]'
lr-x------ 1 user user 64 Jun 2 01:18 33 -> /home/user/.local/share/gvfs-metadata/uuid-4f475892-0c07-4d81-9de3-508b5a3f2cd0
lr-x------ 1 user user 64 Jun 2 01:18 34 -> /home/user/.local/share/gvfs-metadata/uuid-4f475892-0c07-4d81-9de3-508b5a3f2cd0-848640ff.log
lr-x------ 1 user user 64 Jun 2 01:18 35 -> /home/user/.local/share/gvfs-metadata/uuid-c0a69317-16e4-43d1-b2ab-c1d13f6240b7-19bc4c63.log
lr-x------ 1 user user 64 Jun 2 01:18 36 -> /home/user/.local/share/gvfs-metadata/uuid-d52f2b05-dbd7-4f0b-abc6-99cc3433a6f3
lr-x------ 1 user user 64 Jun 2 01:18 37 -> /home/user/.local/share/gvfs-metadata/uuid-fd8349f1-1367-4551-86a7-2a20becd88ec
lr-x------ 1 user user 64 Jun 2 01:18 38 -> /home/user/.local/share/gvfs-metadata/uuid-7a7e6033-b237-44ab-ba74-5a01fc4539b8-77743172.log
lr-x------ 1 user user 64 Jun 2 01:18 39 -> /home/user/.local/share/gvfs-metadata/uuid-fd8349f1-1367-4551-86a7-2a20becd88ec-2a5f96f8.log
lrwx------ 1 user user 64 Jun 2 01:18 4 -> 'anon_inode:[eventfd]'
lr-x------ 1 user user 64 Jun 2 01:18 40 -> /home/user/.local/share/gvfs-metadata/uuid-40b41595-259e-475a-bbe6-885cbbae1d09
lr-x------ 1 user user 64 Jun 2 01:18 41 -> /home/user/.local/share/gvfs-metadata/uuid-cb9587aa-8097-4b45-8bbb-ae48a3e9f2b7-13470a19.log
lr-x------ 1 user user 64 Jun 2 01:18 42 -> /home/user/.local/share/gvfs-metadata/uuid-40b41595-259e-475a-bbe6-885cbbae1d09-1fc319fc.log
lr-x------ 1 user user 64 Jun 2 01:18 43 -> /home/user/.local/share/gvfs-metadata/uuid-d52f2b05-dbd7-4f0b-abc6-99cc3433a6f3-e941401d.log
lr-x------ 1 user user 64 Jun 2 01:18 44 -> /home/user/.local/share/gvfs-metadata/uuid-205309b3-146c-4431-8b7a-78621652cbc5
lr-x------ 1 user user 64 Jun 2 01:18 45 -> /home/user/.local/share/gvfs-metadata/home
lr-x------ 1 user user 64 Jun 2 01:18 46 -> /home/user/.local/share/gvfs-metadata/uuid-6841cab2-07a6-4849-bcb1-8e7c26b8f643
lr-x------ 1 user user 64 Jun 2 01:18 47 -> /home/user/.local/share/gvfs-metadata/uuid-6841cab2-07a6-4849-bcb1-8e7c26b8f643-7b5c126a.log
lr-x------ 1 user user 64 Jun 2 01:18 48 -> /home/user/.local/share/gvfs-metadata/uuid-205309b3-146c-4431-8b7a-78621652cbc5-e5b98181.log
lr-x------ 1 user user 64 Jun 2 01:18 49 -> /home/user/.local/share/gvfs-metadata/home-20a72a89.log
lrwx------ 1 user user 64 Jun 2 01:18 5 -> 'socket:[33268]'
lr-x------ 1 user user 64 Jun 2 01:19 50 -> /daten/a/b/c
lrwx------ 1 user user 64 Jun 2 01:18 51 -> 'socket:[186623]'
lr-x------ 1 user user 64 Jun 2 01:19 52 -> /daten/a/b/c
lrwx------ 1 user user 64 Jun 2 01:18 53 -> 'socket:[186625]'
lrwx------ 1 user user 64 Jun 2 01:18 54 -> 'socket:[188793]'
lr-x------ 1 user user 64 Jun 2 01:19 55 -> /daten/a/b/c
lr-x------ 1 user user 64 Jun 2 01:18 56 -> /daten/a/b/c
lrwx------ 1 user user 64 Jun 2 01:19 57 -> 'anon_inode:[eventfd]'
lrwx------ 1 user user 64 Jun 2 01:18 58 -> 'anon_inode:[eventfd]'
lrwx------ 1 user user 64 Jun 2 01:19 59 -> 'anon_inode:[eventfd]'
lrwx------ 1 user user 64 Jun 2 01:18 6 -> 'anon_inode:[eventfd]'
lrwx------ 1 user user 64 Jun 2 01:19 60 -> 'anon_inode:[eventfd]'
lr-x------ 1 user user 64 Jun 2 01:19 61 -> /daten/a/b/c
lrwx------ 1 user user 64 Jun 2 01:18 62 -> 'socket:[185558]'
lr-x------ 1 user user 64 Jun 2 01:19 63 -> /daten/a/b/c
lrwx------ 1 user user 64 Jun 2 01:19 64 -> 'anon_inode:[eventfd]'
lr-x------ 1 user user 64 Jun 2 01:19 65 -> /daten/a/b/c
lr-x------ 1 user user 64 Jun 2 01:19 66 -> /proc/3328/mountinfo
lrwx------ 1 user user 64 Jun 2 01:18 7 -> 'socket:[36284]'
lrwx------ 1 user user 64 Jun 2 01:18 8 -> 'socket:[36286]'
lrwx------ 1 user user 64 Jun 2 01:18 9 -> 'socket:[36288]'
The mentioned directory /daten/a/b/c (anonymized name) contains 8 subdirectories with a total size of 368M, but no files itself.
I would say it's a different folder every time, but I have to re-check tomorrow.
All the logfiles contain binary data, so nothing I can use.
10570 is the gvfsd-metadata process; its open files stay constant.
3328 is Thunar; it changes its open files at high speed, multiple times per second. But the folder is always the same, at least today.
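To see that churn live, the fd directory can be polled (a sketch; 3328 is the Thunar PID from the top output above):
# count Thunar's open file descriptors twice per second
watch -n 0.5 'ls /proc/3328/fd | wc -l'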
My system doesn't feel slow; the CPU is about 3 years old. But from time to time all Thunar windows block, normally when changing folders, mostly when the new folder contains lots of files.
On newer systems with newer Thunars and with thumbnailers there is some extra CPU activity, but still within 10%. [...] I see 1-2 second pauses in the interface every so often, never actually interrupting anything.
I don't think it's a performance issue. I think it's a race condition or an uncaught error that happens while trying to read unsupported/broken metadata.
It's a condition that might start after two hours of work and then never stops until a reboot. Every day.
I had the same problem three times last month (Thunar using 100% of the CPU), each time while I was using the web browser.
I am using Thunar 1.6.18 and gvfs 1.36.3 (compiled with -Dgoa=false and -Dgoogle=false).
I disabled JavaScript in the browser and have had no problem since.