March 3rd: Sometimes the picture of my profile is bugging; I think it doesn't load right or something.
Same here
Hello, I already made a topic but nothing has changed, so here I am again. Every time I'm tabbing out, my profile picture is bugging, and when I tab back it is normal. I don't know why; hoping that this will be fixed soon.
No need for a second thread then.
We know about the issue, but it is currently not our highest priority.
Hey there,
When I tab out of TeamSpeak 5 to my desktop or another program, the profile picture goes 75% dark.
I don't know how or why that happens, but I really would like to have it fixed. To me it seems like a bug, but I don't know. Maybe it is fixable somehow?
This is Chrome doing it, and we hope that the next version will fix it.
But until that version is released as stable, we have to live with this rendering bug.
We guess it will take 3-5 weeks until the update for this is public.
I am glad that you know what the problem source is. Thanks for the response, keep up the good work <3
Merry Christmas, TeamSpeak. I got a present for you!
I ran some tests on the First Frame Extractor (FFE) to identify some of the underlying problems causing the occasional erroneous extractions. In this post, I will run through my ideas, tests, and conclusions.
What even is the problem?
When the blur event is emitted from the frontend, all image sources within the client, like avatars and file transfers, are changed to contain the ts-frame=1 option. This option is used to turn off animations when the client is not focused: in the background, it triggers the FFE to generate a simple static image. For non-animated images, this should make no difference; for animated images, like APNGs or GIFs, it should return the first frame. However, the FFE sometimes returns a broken image, usually a prematurely truncated stream. This manifests as an image with the correct dimensions (the size is part of the file header) that becomes transparent from a random pixel downwards.
On the left is the expected image; on the right is the same image, broken.
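The source swap described above can be sketched roughly as follows. This is a minimal illustration of the mechanism, assuming a query-style option; `toggleFrameOption` is my own hypothetical helper, not the client's actual code:

```javascript
// Hypothetical sketch: on `blur` the client appends the `ts-frame=1` option
// to every image source (so the backend serves a static first frame), and
// on `focus` it removes the option again.
function toggleFrameOption(src, focused) {
  const base = src.replace(/[?&]ts-frame=1$/, "");
  if (focused) return base; // restore the animated original
  const sep = base.includes("?") ? "&" : "?";
  return base + sep + "ts-frame=1"; // request the static first frame
}

// In the client this would presumably be wired to the window focus events:
// window.addEventListener("blur",  () => swapAllImageSources(false));
// window.addEventListener("focus", () => swapAllImageSources(true));
```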
Where does this happen?
At this point, it is unclear where exactly in the pipeline this error occurs. The data returned from the GET request is already compromised when the error occurs, which excludes the render pipeline of the Chromium Embedded Framework (CEF). As I cannot access the source code, and running the released client in a debugger is not feasible, I can only run tests on the entire pipeline between web request and response. This makes any conclusion about the origin of the bug pure speculation.
How does this happen?
My first assumption was congestion, either in the handler for the custom tsic protocol used within the client or in the FFE itself. This can be tested by measuring the percentage of incorrect results returned from the FFE at varying intervals between requests. Note that this test only compares image byte sizes, thus potentially missing other errors. The avatar image used for the test has an expected size of 96947 bytes. The following code was used for this test:
// Note: `for...of` (not `for...in`) is needed to iterate the interval values,
// and the fetch must be awaited so `failed` is final when it is printed.
let tries = 100;
for (const timeout of [0, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]) {
    for (let k = 0; k < 5; k++) {
        let failed = 0;
        for (let i = 0; i < tries; i++) {
            await fetch("tsic://avatar/<redacted>?ts-frame=1", {
                cache: "no-store",
                mode: "no-cors"
            }).then((resp) => resp.arrayBuffer()).then((buf) => buf.byteLength != 96947 && failed++);
            await new Promise((res) => setTimeout(res, timeout));
        }
        console.error(`${failed}/${tries} failed with timeout ${timeout}`);
    }
}
Executing this yields the following results.
The y-axis shows the failures per 100 requests; the different boxes show the results per interval. It is evident that there is no significant difference between the various intervals, which suggests that the fault is not caused by congestion. As such, all following tests were performed without a timeout. Overall, an error rate of 2.82% can be observed.
My next idea was to compare the performance of the FFE, given different images. This is also a relatively simple test after bringing the client into a state with a few images shown. For this test, the following code was used:
let tries = 1000;
let images = [].slice.call(document.getElementsByTagName("IMG")).map((image) => image.src);
images = new Set(images.filter((image) => image.endsWith("ts-frame=1") && image.startsWith("tsic://")));
Promise.all([...images].map(async (image) => {
    // Determine the expected size once per image, then re-request it repeatedly.
    let expected_size = await fetch(image, {
        cache: "no-store",
        mode: "no-cors"
    }).then((r) => r.arrayBuffer()).then((buf) => buf.byteLength);
    let output = [];
    for (let k = 0; k < 5; k++) {
        let failed = 0;
        for (let j = 0; j < tries; j++) {
            await fetch(image, {
                cache: "no-store",
                mode: "no-cors"
            }).then((r) => r.arrayBuffer()).then((buf) => buf.byteLength != expected_size && failed++);
        }
        output.push(`${failed}/${tries} failed with image ${image}`);
    }
    return output;
})).then((results) => {
    for (const file of results) {
        for (const row of file) {
            console.error(row);
        }
    }
});
An interesting observation can be made from the results: some of the files seem to never fail. Each cell shows the number of failures in 1000 tries.
Avatar 1 | Avatar 2 | Avatar 3 | Avatar 4 | Avatar 5 | Avatar 6 | Avatar 7 | Avatar 8 | Avatar 9 | Avatar 10 | Avatar 11 | Avatar 12 | Avatar 13 | Avatar 14 | Avatar 15 | Avatar 16 | Avatar 17 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
10 | 0 | 10 | 0 | 0 | 0 | 0 | 14 | 4 | 11 | 2 | 5 | 0 | 4 | 9 | 7 | 8 |
7 | 0 | 10 | 0 | 0 | 0 | 0 | 10 | 2 | 10 | 2 | 2 | 0 | 7 | 11 | 13 | 15 |
14 | 0 | 13 | 0 | 0 | 0 | 0 | 12 | 9 | 13 | 2 | 5 | 0 | 7 | 12 | 13 | 4 |
14 | 0 | 23 | 0 | 0 | 0 | 0 | 24 | 5 | 39 | 4 | 5 | 0 | 6 | 48 | 44 | 23 |
52 | 0 | 143 | 0 | 0 | 0 | 0 | 131 | 40 | 130 | 11 | 15 | 0 | 32 | 215 | 201 | 117 |
I ran the UNIX file command on each image to identify potential differences between the files. However, the command reports more or less the same result for every image, differing only slightly in the reported dimensions, and this difference does not correlate with the failures:
PNG image data, 320 x 320, 8-bit/color RGBA, non-interlaced
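Those dimensions come straight from the file header: every PNG stores width and height at fixed offsets in its IHDR chunk, which also explains why even a truncated avatar still reports the correct size. A minimal sketch of reading them (my own helper, not part of the test code):

```javascript
// Read the width and height from a PNG's IHDR chunk. Layout: 8 signature
// bytes, 4-byte chunk length, 4-byte "IHDR" tag, then width and height as
// big-endian 32-bit integers at offsets 16 and 20.
function pngDimensions(bytes) {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  return { width: view.getUint32(16), height: view.getUint32(20) };
}
```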
However, when considering the file size instead, it can be observed that all files exhibiting erroneous behavior are larger than those that don't:
Avatar 1: 114581 bytes
Avatar 2: 56227 bytes # no failures
Avatar 3: 151264 bytes
Avatar 4: 60282 bytes # no failures
Avatar 5: 52297 bytes # no failures
Avatar 6: 28633 bytes # no failures
Avatar 7: 6868 bytes # no failures
Avatar 8: 141947 bytes
Avatar 9: 103297 bytes
Avatar 10: 192326 bytes
Avatar 11: 82214 bytes
Avatar 12: 85938 bytes
Avatar 13: 54199 bytes # no failures
Avatar 14: 99179 bytes
Avatar 15: 196727 bytes
Avatar 16: 237844 bytes
Avatar 17: 150776 bytes
This suggests the error starts somewhere between 60282 and 82214 bytes, presumably at 2^16 = 65536. This leads me to believe the FFE processes images in buffers of that size. By slightly modifying the code from the previous test, we can check whether the broken images are cut off at a multiple of 2^16 bytes:
let tries = 1000;
let images = [].slice.call(document.getElementsByTagName("IMG")).map((image) => image.src);
images = new Set(images.filter((image) => image.endsWith("ts-frame=1") && image.startsWith("tsic://")));
let failed = 0;
let expected_failure = 0;
Promise.all([...images].map(async (image) => {
    let expected_size = await fetch(image, {
        cache: "no-store",
        mode: "no-cors"
    }).then((r) => r.arrayBuffer()).then((buf) => buf.byteLength);
    for (let k = 0; k < 5; k++) {
        for (let j = 0; j < tries; j++) {
            await fetch(image, {
                cache: "no-store",
                mode: "no-cors"
            }).then((r) => r.arrayBuffer()).then((buf) => {
                if (buf.byteLength != expected_size) {
                    // A length that is a multiple of 2^16 is the expected failure mode.
                    if (buf.byteLength % 2 ** 16) {
                        failed++;
                    } else {
                        expected_failure++;
                    }
                }
            });
        }
    }
})).then(() => {
    console.error(`${failed}/${tries * images.size * 5} failed unexpectedly`);
    console.error(`${expected_failure}/${tries * images.size * 5} failed on multiple of 2^16`);
});
This presents the following results:
244/145000 failed unexpectedly
899/145000 failed on multiple of 2^16
Given that 78.65% of all broken images are truncated at a multiple of 2^16, I ran further tests with 300 tries each over 22 randomly chosen images. The source for this test suite can be found on GitHub.
Analyzing subject: 1
Expected size: 5284 bytes
✅ No broken files

Analyzing subject: 2
Expected size: 6868 bytes
✅ No broken files

Analyzing subject: 3
Expected size: 13727 bytes
✅ No broken files

Analyzing subject: 4
Expected size: 7031 bytes
✅ No broken files

Analyzing subject: 5
Expected size: 12109 bytes
✅ No broken files

Analyzing subject: 6
Expected size: 7983 bytes
✅ No broken files

Analyzing subject: 7
Expected size: 20491 bytes
✅ No broken files

Analyzing subject: 8
Expected size: 20602 bytes
✅ No broken files

Analyzing subject: 9
Expected size: 28633 bytes
✅ No broken files

Analyzing subject: 10
Expected size: 45525 bytes
✅ No broken files

Analyzing subject: 11
Expected size: 42277 bytes
✅ No broken files

Analyzing subject: 12
Expected size: 56227 bytes
✅ No broken files

Analyzing subject: 13
Expected size: 52297 bytes
✅ No broken files

Analyzing subject: 14
Expected size: 107862 bytes
⚠️ Found 21.33% broken files (64/300)
Checking 63 broken files (21.00%) of size 65536
✅ All files are the same
✅ The files are truncated at a multiple of 2^16
Checking 1 broken files (0.33%) of size 173398
✅ All files are the same
❌ The files exhibit an unexpected size
The files differ in content at offset 65536 from the expectation
The unexpected content matches the expectation between offset 65536 and the end
The files have 65536 duplicate bytes in the middle

Analyzing subject: 15
Expected size: 114581 bytes
⚠️ Found 10.67% broken files (32/300)
Checking 32 broken files (10.67%) of size 65536
✅ All files are the same
✅ The files are truncated at a multiple of 2^16

Analyzing subject: 16
Expected size: 122237 bytes
⚠️ Found 11.33% broken files (34/300)
Checking 33 broken files (11.00%) of size 65536
✅ All files are the same
✅ The files are truncated at a multiple of 2^16
Checking 1 broken files (0.33%) of size 187773
✅ All files are the same
❌ The files exhibit an unexpected size
The files differ in content at offset 65536 from the expectation
The unexpected content matches the expectation between offset 65536 and the end
The files have 65536 duplicate bytes in the middle

Analyzing subject: 17
Expected size: 126071 bytes
⚠️ Found 6.33% broken files (19/300)
Checking 17 broken files (5.67%) of size 65536
✅ All files are the same
✅ The files are truncated at a multiple of 2^16
Checking 2 broken files (0.67%) of size 191607
✅ All files are the same
❌ The files exhibit an unexpected size
The files differ in content at offset 65536 from the expectation
The unexpected content matches the expectation between offset 65536 and the end
The files have 65536 duplicate bytes in the middle

Analyzing subject: 18
Expected size: 150717 bytes
⚠️ Found 9.00% broken files (27/300)
Checking 25 broken files (8.33%) of size 85181
✅ All files are the same
❌ The files exhibit an unexpected size
The files differ in content at offset 65536 from the expectation
The unexpected content matches the expectation between offset 65535 and the end
The files are missing 65536 bytes in the middle
Checking 2 broken files (0.67%) of size 216253
✅ All files are the same
❌ The files exhibit an unexpected size
The files differ in content at offset 65536 from the expectation
The unexpected content matches the expectation between offset 65536 and the end
The files have 65536 duplicate bytes in the middle

Analyzing subject: 19
Expected size: 162948 bytes
⚠️ Found 9.00% broken files (27/300)
Checking 23 broken files (7.67%) of size 97412
✅ All files are the same
❌ The files exhibit an unexpected size
The files differ in content at offset 65536 from the expectation
The unexpected content matches the expectation between offset 65535 and the end
The files are missing 65536 bytes in the middle
Checking 4 broken files (1.33%) of size 228484
✅ All files are the same
❌ The files exhibit an unexpected size
The files differ in content at offset 65536 from the expectation
The unexpected content matches the expectation between offset 65536 and the end
The files have 65536 duplicate bytes in the middle

Analyzing subject: 20
Expected size: 165035 bytes
⚠️ Found 5.00% broken files (15/300)
Checking 15 broken files (5.00%) of size 99499
✅ All files are the same
❌ The files exhibit an unexpected size
The files differ in content at offset 65536 from the expectation
The unexpected content matches the expectation between offset 65535 and the end
The files are missing 65536 bytes in the middle

Analyzing subject: 21
Expected size: 201462 bytes
⚠️ Found 6.67% broken files (20/300)
Checking 19 broken files (6.33%) of size 135926
✅ All files are the same
❌ The files exhibit an unexpected size
The files differ in content at offset 65536 from the expectation
The unexpected content matches the expectation between offset 65535 and the end
The files are missing 65536 bytes in the middle
Checking 1 broken files (0.33%) of size 131072
✅ All files are the same
✅ The files are truncated at a multiple of 2^16

Analyzing subject: 22
Expected size: 186723 bytes
⚠️ Found 7.33% broken files (22/300)
Checking 21 broken files (7.00%) of size 121187
✅ All files are the same
❌ The files exhibit an unexpected size
The files differ in content at offset 65536 from the expectation
The unexpected content matches the expectation between offset 65535 and the end
The files are missing 65536 bytes in the middle
Checking 1 broken files (0.33%) of size 252259
✅ All files are the same
❌ The files exhibit an unexpected size
The files differ in content at offset 65536 from the expectation
The unexpected content matches the expectation between offset 65536 and the end
The files have 65536 duplicate bytes in the middle
As you can see, ALL broken images break at the magic number of 2^16 = 65536. An interesting observation is that some broken files are even larger than the original image in file size, yet are visually still truncated. For all broken images considered here, either the second 65536 bytes are missing, or the first 65536 bytes are duplicated.
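These failure shapes can be told apart from byte lengths alone. A small classifier sketch (the function name and category labels are my own):

```javascript
const CHUNK = 2 ** 16; // 65536

// Classify a response by comparing its length to the expected length.
function classifyBrokenFile(expectedLength, actualLength) {
  if (actualLength === expectedLength) return "intact";
  if (actualLength < expectedLength && actualLength % CHUNK === 0)
    return "truncated at a multiple of 2^16";
  if (actualLength === expectedLength - CHUNK) return "missing one chunk";
  if (actualLength === expectedLength + CHUNK) return "one chunk duplicated";
  return "unknown corruption";
}
```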
What does this all mean?
Given my findings, I presume this problem stems from a race condition or a data race: in some cases, the FFE fails to properly handle files larger than 2^16 bytes. The failure itself appears deterministic in shape, as every broken file is missing, duplicating, or cut off at exactly one 65536-byte chunk.
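The chunk hypothesis can be cross-checked against the sizes reported in the test run above: every broken size should be either a multiple of 65536 or exactly one chunk away from the expected size.

```javascript
// Cross-check using the expected and broken sizes from the test run above
// (subjects 14-22): each broken size is a multiple of 65536, or exactly
// one 65536-byte chunk short of / beyond the expected size.
const CHUNK = 65536;
const runs = [
  { expected: 107862, broken: [65536, 173398] },
  { expected: 114581, broken: [65536] },
  { expected: 122237, broken: [65536, 187773] },
  { expected: 126071, broken: [65536, 191607] },
  { expected: 150717, broken: [85181, 216253] },
  { expected: 162948, broken: [97412, 228484] },
  { expected: 165035, broken: [99499] },
  { expected: 201462, broken: [135926, 131072] },
  { expected: 186723, broken: [121187, 252259] },
];
const consistent = runs.every(({ expected, broken }) =>
  broken.every((size) => size % CHUNK === 0 || Math.abs(size - expected) === CHUNK)
);
console.log(consistent); // true for the data above
```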
What can TeamSpeak do to fix this?
Use my findings to identify and fix the bug.
And now, for anyone who has read through this analysis, have a merry Christmas and a happy New Year!
This is very helpful (a dev said).
Let's see what we can make of this. Hopefully this is something we can fix from our side.
Why not temporarily disable GIF support for profile images?
Based on my observations, it seems that the ts-client appends "ts-frame=1" to the element when the window is unfocused, which confuses Chrome due to the changing "src". As a temporary solution, why not consider commenting out this function and using a normal static src?
This annoying bug has been in there for years, and the feature to use GIFs as a profile image is disabled anyway.
This is not really related to GIFs. It mainly happens to PNGs (I think).
You can see a bit more here: On the First Frame Extractor. Let's hope it will get fixed soon.