Redshift sleep experiment

Self-experiment on whether screen-tinting software such as Redshift/f.lux affects sleep times and sleep quality; Redshift lets me sleep earlier but doesn’t improve sleep quality.
experiments, biology, statistics, decision-theory, Zeo, shell, R, power-analysis
2012-05-09–2019-12-27 finished certainty: highly likely importance: 7


From 2012–2013 I ran a randomized experiment with a free program (Redshift) which reddens screens at night to avoid tampering with melatonin secretion & sleep, measuring sleep changes with my Zeo. With 533 days of data, the main result is that Redshift causes me to go to sleep half an hour earlier but otherwise does not improve sleep quality.

My earlier melatonin experiment found it helped me sleep. Melatonin secretion is also influenced by the color of light (some references can be found in my melatonin writeup): specifically, blue light tends to suppress melatonin secretion while redder light does not affect it. (This makes sense: blue/white light is associated with the brightest part of the day, while reddish light is the color of sunsets.) Electronics and computer monitors frequently emit white or blue light. (The recent trend of bright blue LEDs is particularly deplorable in this regard.) Besides the plausible suggestion about melatonin, reddish light impairs night vision less and is easier to see under dim conditions: you may want a blazing white screen at noon so you can see something, but in a night setting, that is like staring for hours straight into a fluorescent light.

Hence, you would like to both dim your monitor and also shift its color temperature towards the warmer, redder end of the spectrum with a utility like Redshift or f.lux.
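For example, with Redshift itself the tinting can be applied or undone in one shot from the command line (a minimal illustration; the 3500K value here is an arbitrary choice, not a setting used in this experiment):

redshift -O 3500   # one-shot: tint the screen to a warmer 3500K color temperature
redshift -x        # reset the screen back to normal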

But does it actually work? And does it work in addition to my usual melatonin supplementation of 1–1.5mg? An experiment is called for!

The suggested mechanism is through melatonin secretion. So we’d look at all the usual sleep metrics plus mood plus an additional one: what time I go to bed. One of the reasons I became interested in melatonin was as a way of getting myself to go to bed rather than stay up until 3 AM—a chemically enforced bedtime—and it seems plausible that if Redshift reduces the interference from the computer monitor, then nights without it will see me stay up later (“but I don’t feel sleepy yet”).

Design

Power calculation

The earlier melatonin experiment found somewhat weak effects with >100 days of data, and one would expect that actually consuming 1.5mg of melatonin would be a stronger intervention than simply shifting my laptop screen color. (What if I don’t use my laptop that night? What if I’m surrounded by white lights?) 30 days is probably too small, judging from the other experiments; 60 is more reasonable, but 90 feels more plausible.

It may be time to learn some more statistics, specifically how to do power calculations for experiments like these. As I understand it, a power calculation is an equation balancing your sample size, the effect size, the significance level (eg the old p < 0.05), and the statistical power; given all but one, you can deduce the remaining one. So if you already knew your sample size and your effect size, you could predict what power your results would have at a given significance level. In this specific case, we can specify our significance at the usual level, and we can guess at the effect size, but we want to know what sample size we should have.

Let’s pin down the effect size: we expect any Redshift effect to be weaker than melatonin supplementation, and the most striking change from melatonin (the reduction in total sleep time by ~50 minutes) had an effect size of 0.37. As usual, R has a bunch of functions we can use. Stealing shamelessly from an R guide, and reusing the means and standard deviations from the melatonin experiment, we can begin asking questions like: “suppose I wanted a 90% chance of my experiment producing a solid result of p < 0.01 (not 0.05, so I can do multiple correction) if the Redshift data looks like the melatonin data and acts the same way?”

install.packages("pwr", depend = TRUE)
library(pwr)
pwr.t.test(d=(456.4783-407.5312)/131.4656,power=0.9,sig.level=0.01,type="paired",alternative="greater")
#
#      Paired t test power calculation
#
#               n = 96.63232
#               d = 0.3723187
#       sig.level = 0.01
#           power = 0.9
#     alternative = greater
#
#  NOTE: n is number of *pairs*

n is pairs of days, so each n is one day on, one day off; so it requires 194 days! Ouch, but OK, that was making some assumptions. What if we say the effect size was halved?

pwr.t.test(d=((456.4783-407.5312)/131.4656)/2,power=0.9,sig.level=0.01,type="paired",alternative="greater")
#
#      Paired t test power calculation
#
#               n = 378.3237

That’s much worse (as one should expect—the smaller the effect, the stricter the desired p-value, or the lower the acceptable chance of missing a real effect, the more data you need to see it). What if we weaken the power and significance level to 0.5 and 0.05 respectively?

pwr.t.test(d=((456.4783-407.5312)/131.4656)/2,power=0.5,sig.level=0.05,type="paired",alternative="greater")
#
#      Paired t test power calculation
#
#               n = 79.43655
#               d = 0.1861593

This is more reasonable, since n = 80 or 160 days will fit within the experiment, but look at what it cost us: it’s now a coin-flip that the results will show anything, and they may not pass multiple correction either. But it’s also expensive to gain more certainty—if we halve that 50% chance of finding nothing, it doubles the number of pairs of days we need from 79 to 157:

pwr.t.test(d=((456.4783-407.5312)/131.4656)/2,power=0.75,sig.level=0.05,type="paired",alternative="greater")
#
#      Paired t test power calculation
#
#               n = 156.5859
#               d = 0.1861593

Statistics is a harsh master. What if we solve the equation for a different variable, power or significance? Maybe I can handle 200 days; what would 100 pairs buy me in terms of power?

pwr.t.test(d=((456.4783-407.5312)/131.4656)/2,n=100,sig.level=0.05,type="paired",alternative="greater")
#
#      Paired t test power calculation
#
#               n = 100
#               d = 0.1861593
#       sig.level = 0.05
#           power = 0.5808219

Just 58%. (But at p = 0.01, n = 100 only buys me 31% power, so it could be worse!) At 120 pairs/240 days, I get 65% power, so it may all be doable. I guess it’ll depend on circumstances: ideally, a Redshift trial will involve no work on my part, so the real question becomes what quicker sleep experiments it stops me from running and how long I can afford to run it. Would it painfully overlap with things like the lithium trial?
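(The 120-pair figure can be verified with the same function; this call is not in the original write-up, it simply re-runs pwr.t.test with n=120:)

pwr.t.test(d=((456.4783-407.5312)/131.4656)/2,n=120,sig.level=0.05,type="paired",alternative="greater")
# power comes out to roughly 0.65, matching the ~65% figure above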

Speaking of the lithium trial, the plan is to run it for a year. What would 2 years of Redshift data buy me even at p = 0.01?

pwr.t.test(d=((456.4783-407.5312)/131.4656)/2,n=365,sig.level=0.01,type="paired",alternative="greater")
#
#      Paired t test power calculation
#
#               n = 365
#               d = 0.1861593
#       sig.level = 0.01
#           power = 0.8881948

Acceptable.

Experiment

How exactly to run it? I don’t expect any bleed-over from day to day, so we randomize on a per-day basis. Each day must either have Redshift running or not. Redshift is run from cron every 15 minutes: */15 * * * * redshift -o. (This is to deal with logouts, shutdowns, freezes etc, that might kill Redshift as a persistent daemon.) We’ll change the crontab so that at the beginning of each day it runs the following (note that $RANDOM and the ((...)) arithmetic test are Bash features, so this assumes cron’s SHELL is set to Bash):

@daily redshift -x; if ((RANDOM \% 2 < 1));
          then touch ~/.redshift; echo `date +"\%d \%b \%Y"`: on >> ~/redshift.log;
          else rm ~/.redshift; echo `date +"\%d \%b \%Y"`: off >> ~/redshift.log; fi

Then the Redshift call simply includes a check for the file’s existence:

*/15 * * * * if [ -f ~/.redshift ]; then redshift -o; fi

Now we have completely automatic randomization and logging of the experiment. As long as I don’t screw things up by deleting either file or uninstalling Redshift, and I keep using my Zeo, all the data is gathered and labeled nicely until I finish the experiment and do the analysis. Non-blinded, or perhaps I should say quasi-blinded—I initially don’t know, but I can check the logs or file to see what that day was, and I will at some point in the night notice whether the monitor is reddened or not.
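Since the log records each day’s assignment, the on/off balance can be checked at any time; a convenience one-liner along these lines (not part of the original scripts) would do:

grep -c ': on$' ~/redshift.log    # count of days randomized to Redshift
grep -c ': off$' ~/redshift.log   # count of days randomized to no Redshift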

As it turned out, I received proof that I was not noticing the randomization. On 2013-01-11, due to Internet connectivity problems, I was idling on my computer and thought to myself that I hadn’t noticed Redshift turn my screen salmon-colored in a while, and I happened to idly try redshift -x (reset the screen to normal) and then redshift -o (immediately turn the screen red)—but neither did anything at all. Busy with other things, I set the anomaly aside until a few days later, when I traced the problem to a package I had uninstalled back on 2012-09-25 because my system didn’t use it—which it did not, but uninstalling it had the effect of removing another package which turned out to set the default video driver to the proper driver, and so removing it forced my system onto a more primitive driver which apparently did not support Redshift functionality¹! And I had not noticed for 3 solid months. This was a frustrating incident, but since it took me so long to notice, I am going to keep the 3 months’ data and keep them in the ‘off’ category—this is not nearly as good as if those 3 months had varied (since now the ‘on’ category will be underpopulated), but it seems better than just deleting them all.

So to recap: the experiment is 100+ days with Redshift randomized on or off by a shell script, measuring the usual sleep metrics plus bedtime. The expectation is that lack of Redshift will produce a weak negative effect: increasing awakenings & time awake & light sleep, increasing overall sleep time, and also pushing back bedtime.

VoI

Like the modafinil day trial, this is another value-less experiment justified by its intrinsic interest. I expect the results will confirm what I believe: that red-tinting my laptop screen will result in less damage to my sleep by not forcing lower melatonin levels with blue light. The only outcome that might change my decisions is if the use of Redshift actually worsens my sleep, but I regard this as highly unlikely. It is cheap to run as it is piggybacking on other experiments, and all the randomizing & data recording is being handled by 2 simple shell scripts.

Data

The experiment ran from 2012-05-11 to 2013-11-04, including the unfortunate January 2013 period, with n = 533 days. I stopped it at that point, having reached the 100+ goal and since I saw no point in continuing to damage my sleep patterns to gain more data.

Analysis

Preprocessing:

redshift <- read.csv("https://www.gwern.net/docs/zeo/2012-2013-gwern-zeo-redshift.csv")
redshift <- subset(redshift, select=c(Start.of.Night, Time.to.Z, Time.in.Wake, Awakenings,
                                      Time.in.REM, Time.in.Light, Time.in.Deep, Total.Z, ZQ,
                                      Morning.Feel, Redshift, Date))
redshift$Date <- as.Date(redshift$Date, format="%F")

redshift$Start.of.Night <- sapply(strsplit(as.character(redshift$Start.of.Night), " "), function(x) { x[2] })
## Parse "HH:MM" timestamps into minutes since midnight, eg. "06:45" -> 405:
interval <- function(x) { if (!is.na(x)) { if (grepl(" s",x)) as.integer(sub(" s","",x))
                                           else { y <- unlist(strsplit(x, ":")); as.integer(y[[1]])*60 + as.integer(y[[2]]); }
                                         }
                          else NA
                        }
redshift$Start.of.Night <- sapply(redshift$Start.of.Night, interval)
## Correct for the switch to new unencrypted firmware in March 2013:
redshift[(as.Date(redshift$Date) >= as.Date("2013-03-11")),]$Start.of.Night <-
  (redshift[(as.Date(redshift$Date) >= as.Date("2013-03-11")),]$Start.of.Night + 900) %% (24*60)

## after midnight (24*60=1440), Start.of.Night wraps around to 0, which obscures any trends,
## so we'll map anything before 7AM to time+1440
redshift[redshift$Start.of.Night<420 & !is.na(redshift$Start.of.Night),]$Start.of.Night <-
 (redshift[redshift$Start.of.Night<420 & !is.na(redshift$Start.of.Night),]$Start.of.Night + (24*60))
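A quick sanity check (not part of the original pipeline) that the wrap-around remapping worked: all bedtimes should now fall in one continuous range of minutes, from late evening through early morning.

range(redshift$Start.of.Night, na.rm=TRUE)
# roughly 1020 to 1660 minutes, ie. ~5PM to ~3:40AM, consistent with the skim summary below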

Descriptive:

library(skimr)
skim(redshift)
# Skim summary statistics
#  n obs: 533
#  n variables: 12
#
# ── Variable type:Date ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
#  variable missing complete   n        min        max     median n_unique
#      Date       0      533 533 2012-05-11 2013-11-04 2013-02-11      530
#
# ── Variable type:factor ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
#  variable missing complete   n n_unique                top_counts ordered
#  Redshift       0      533 533        2  FA: 313,  TR: 220, NA: 0   FALSE
#
# ── Variable type:integer ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
#       variable missing complete   n   mean    sd p0    p25 p50    p75 p100     hist
#     Awakenings      17      516 533   7     2.92  0   5      7   9      18 ▁▃▆▇▂▁▁▁
#   Morning.Feel      20      513 533   2.72  0.84  0   2      3   3       4 ▁▁▁▅▁▇▁▂
#   Time.in.Deep      17      516 533  62.37 12.07  0  56     63  70      98 ▁▁▁▂▆▇▂▁
#  Time.in.Light      17      516 533 284.39 43.24  0 267    290 308     411 ▁▁▁▁▂▇▃▁
#    Time.in.REM      17      516 533 166.85 30.99  0 150    172 186     235 ▁▁▁▁▃▇▆▂
#   Time.in.Wake      17      516 533  22.45 19.06  0  12     19  28     276 ▇▁▁▁▁▁▁▁
#      Time.to.Z      17      516 533  23.92 13.08  0  16.75  22  30     135 ▃▇▂▁▁▁▁▁
#        Total.Z      17      516 533 513.1  71.97  0 491    523 553.25  695 ▁▁▁▁▁▆▇▁
#             ZQ      17      516 533  92.11 13.54  0  88     94 100     123 ▁▁▁▁▁▆▇▁
#
# ── Variable type:numeric ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
#        variable missing complete   n    mean    sd   p0  p25  p50  p75 p100     hist
#  Start.of.Night      17      516 533 1444.06 49.72 1020 1415 1445 1475 1660 ▁▁▁▁▅▇▁▁

## Correlation matrix:
## Drop the Redshift & Date columns; re-add Redshift as numeric; use only complete cases (rows with no missing values); round
redshiftInteger <- cbind(redshift[,-(11:12)], Redshift=as.integer(redshift$Redshift))
round(digits=2, cor(redshiftInteger, use="complete.obs"))
#                Start.of.Night Time.to.Z Time.in.Wake Awakenings Time.in.REM Time.in.Light Time.in.Deep Total.Z    ZQ Morning.Feel Redshift
# Start.of.Night           1.00
# Time.to.Z                0.19      1.00
# Time.in.Wake             0.05      0.23         1.00
# Awakenings               0.22      0.17         0.36       1.00
# Time.in.REM              0.08     -0.09        -0.14       0.29        1.00
# Time.in.Light            0.05     -0.08        -0.18       0.23        0.56          1.00
# Time.in.Deep             0.08     -0.08        -0.03       0.15        0.40          0.30         1.00
# Total.Z                  0.08     -0.10        -0.17       0.29        0.84          0.90         0.53    1.00
# ZQ                       0.06     -0.15        -0.31       0.13        0.84          0.81         0.64    0.97  1.00
# Morning.Feel             0.04     -0.22        -0.24      -0.01        0.38          0.37         0.27    0.44  0.46         1.00
# Redshift                -0.19     -0.07         0.03      -0.04       -0.01         -0.03        -0.14   -0.05 -0.07         0.04     1.00

library(ggplot2)
qplot(Date, Start.of.Night, color=Redshift, data=redshift) +
  coord_cartesian(ylim=c(1340,1550)) + stat_smooth() +
  ## Reverse logical color coding, to show that red-tinted screens = earlier bedtime:
  scale_color_manual(values=c("blue", "red"))
Plotting the start of bedtime over time, colored by use of Redshift

Analysis:

l <- lm(cbind(Start.of.Night, Time.to.Z, Time.in.Wake, Awakenings, Time.in.REM, Time.in.Light,
              Time.in.Deep, Total.Z, ZQ, Morning.Feel) ~ Redshift, data=redshift)
summary(manova(l))
##            Df     Pillai approx F num Df den Df     Pr(>F)
## Redshift    1 0.07276182 3.939272     10    502 3.4702e-05
## Residuals 511
summary(l)
## Response Start.of.Night :
## ...Coefficients:
##                 Estimate Std. Error   t value   Pr(>|t|)
## (Intercept)   1452.03333    2.82465 514.05791 < 2.22e-16
## Redshift TRUE  -18.98169    4.38363  -4.33014 1.7941e-05
##
## Response Time.to.Z :
## ...Coefficients:
##               Estimate Std. Error  t value Pr(>|t|)
## (Intercept)   24.72333    0.75497 32.74743  < 2e-16
## Redshift TRUE -1.87826    1.17165 -1.60309  0.10953
##
## Response Time.in.Wake :
## ...Coefficients:
##               Estimate Std. Error  t value Pr(>|t|)
## (Intercept)   22.13333    1.10078 20.10704   <2e-16
## Redshift TRUE  1.02160    1.70831  0.59801   0.5501
##
## Response Awakenings :
## ...Coefficients:
##                Estimate Std. Error  t value Pr(>|t|)
## (Intercept)    7.133333   0.167251 42.65047  < 2e-16
## Redshift TRUE -0.260094   0.259560 -1.00206  0.31679
##
## Response Time.in.REM :
## ...Coefficients:
##                 Estimate Std. Error  t value Pr(>|t|)
## (Intercept)   167.713333   1.732029 96.83056  < 2e-16
## Redshift TRUE  -0.849484   2.687968 -0.31603  0.75211
##
## Response Time.in.Light :
## ...Coefficients:
##                Estimate Std. Error   t value Pr(>|t|)
## (Intercept)   286.20667    2.43718 117.43332  < 2e-16
## Redshift TRUE  -2.98601    3.78231  -0.78947  0.43021
##
## Response Time.in.Deep :
## ...Coefficients:
##                Estimate Std. Error t value   Pr(>|t|)
## (Intercept)   63.863333   0.686927 92.9696 < 2.22e-16
## Redshift TRUE -3.436103   1.066055 -3.2232  0.0013486
##
## Response Total.Z :
## ...Coefficients:
##                Estimate Std. Error   t value Pr(>|t|)
## (Intercept)   517.29333    4.00740 129.08467  < 2e-16
## Redshift TRUE  -7.30272    6.21915  -1.17423  0.24085
##
## Response ZQ :
## ...Coefficients:
##                Estimate Std. Error   t value Pr(>|t|)
## (Intercept)   93.053333   0.758674 122.65253  < 2e-16
## Redshift TRUE -1.799812   1.177401  -1.52863  0.12697
##
## Response Morning.Feel :
## ...Coefficients:
##                Estimate Std. Error  t value Pr(>|t|)
## (Intercept)   2.6933333  0.0485974 55.42136   <2e-16
## Redshift TRUE 0.0719249  0.0754192  0.95367   0.3407
wilcox.test(Start.of.Night ~ Redshift, conf.int=TRUE, data=redshift)
#
#    Wilcoxon rank sum test with continuity correction
#
# data:  Start.of.Night by Redshift
# W = 39789, p-value = 6.40869e-06
# alternative hypothesis: true location shift is not equal to 0
# 95% confidence interval:
#   9.9999821 24.9999131
# sample estimates:
# difference in location
#             15.0000618

To summarize:

Measurement      Effect   Units       Goodness   p
Start of Night   −18.98   minutes     +          <0.001
Time to Z        −1.88    minutes     +          0.11
Time Awake       +1.02    minutes                0.55
Awakenings       −0.26    count       +          0.32
Time in REM      −0.85    minutes     −          0.75
Time in Light    −2.98    minutes     +          0.43
Time in Deep     −3.43    minutes                0.001
Total Sleep      −7.3     minutes                0.24
ZQ               −1.79    ? (index)              0.13
Morning feel     +0.07    1–5 scale   +          0.34

Bayes

In December 2019, I attempted to re-analyze the data in a Bayesian model using brms, to model the correlations & take any temporal trends into account with a spline term, fitting a model like this:

library(brms)
bz <- brm(cbind(Start.of.Night, Time.to.Z, Time.in.Wake, Awakenings, Time.in.REM, Time.in.Light,
              Time.in.Deep, Total.Z, ZQ, Morning.Feel) ~ Redshift + s(as.integer(Date)), chains=30,
              control = list(max_treedepth=13, adapt_delta=0.95),
              data=redshift)
bz

Unfortunately, it proved completely computationally intractable, yielding only a few effective samples after several hours of MCMC, and the usual fixes of tweaking the control parameter and adding semi-informative priors didn’t help at all. It’s possible that the occasional outliers in Start.of.Night screw it up, and I need to switch to a distribution better able to model outliers (eg loosen the default normal/Gaussian to a Student’s t with family="student") or drop outlier days. If those don’t work, it may be that simply modeling them all like that is inappropriate: they all have different scales & distributions. brms doesn’t support full SEM modeling like blavaan, but it does support multivariate outcomes with differing distributions, I believe—it just requires a good deal more messing with the model definitions. Since fitting was so slow, I dropped my re-analysis there.
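For concreteness, a sketch of the outlier-robust variant mentioned above (this assumes brms’s student family applied identically to all responses; I have not run it to convergence):

bz2 <- brm(cbind(Start.of.Night, Time.to.Z, Time.in.Wake, Awakenings, Time.in.REM, Time.in.Light,
                 Time.in.Deep, Total.Z, ZQ, Morning.Feel) ~ Redshift + s(as.integer(Date)),
           family = student(),  # heavier-tailed errors, to absorb the occasional Start.of.Night outlier
           chains = 4, control = list(max_treedepth=13, adapt_delta=0.95),
           data = redshift)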

Conclusion

Redshift does influence my sleep.

One belief—that Redshift helped avoid bright light retarding the sleep cycle, enabling going to bed early—was borne out: on Redshift days, I went to bed an average of 19 minutes earlier. (I had noticed this in my earliest Redshift usage in 2008 and noticed during the experiment that I seemed to be staying up pretty late some nights.) Since I value having a sleep schedule more like that of the rest of humanity and not sleeping past noon, this justifies keeping Redshift installed.

But I am also surprised at the lack of effect on the other aspects of sleep; I was sure Redshift would lead to improvements in waking and how I felt in the morning, if nothing else. Yet, while the point estimates tend to be in the right direction for the most important variables, the effects are relatively trivial (less than a tenth of a point increase in average morning feel? falling asleep 2 minutes faster?) and several are worse—I’m a bit baffled why deep sleep decreased, but it might be due to the lower total sleep.

So it seems Redshift is excellent for shifting my bedtime earlier, but I can’t say it does much else.


  1. The geeky details: I found an error line in the X logs which appeared only when I invoked Redshift; the driver was fbdev and not the correct radeon, which mystified me further, until I read various bug reports and forum problems and wondered why radeon was not loading but the only non-fbdev error message indicated that some driver called ati was failing to load instead. Then I read that ati was the default wrapper over radeon, but then I saw that the package was not installed, installed it, noticed it was pulling in as a dependency useless Mach64 drivers, and had a flash: perhaps I had uninstalled the useless Mach64 drivers, forcing the package providing ati to be uninstalled too, permitted its uninstallation because I knew it was not the package providing radeon, which then caused the ati load to fail and to not then load radeon, but X succeeded in loading fbdev, which does not support Redshift, leading to a permanent failure of all uses of Redshift. Phew! I was right.