1453

Author: Shan Chunqing

I, Shan Chunqing (ID number 330105199412120020), authorize anyone to reproduce part or all of this content, and authorize arbitrary modification of the content when reproducing it. When reproducing, you may freely choose whether to credit the source and the author, but you may not credit any author other than Shan Chunqing.

The imperial period of the Roman Republic was attended by signs in the heavens at both its founding and its end. After Caesar was assassinated, the brightest comet in recorded human history appeared for seven consecutive days; both the Roman histories and the Book of Han record the phenomenon, and the Roman Republic thereafter entered its imperial period. Seven days before New Rome fell, there was a lunar eclipse; seven days after it, New Rome was taken and the Roman Republic was extinguished. Coincidence, perhaps, or perhaps necessity. To this day the Muslim world still commemorates the event; the two patriotic Turkish films "Mehmed Takes the City" (shot in 1951 and in 2012) are a typical example.

According to sources that can be cross-checked against one another, the main reasons New Rome fell can be reduced to the following three points:

  1. The Urban cannon served as psychological intimidation. It could not breach the walls and only brought down some rubble; but its month of sustained thunder, one shot every few hours, told the defenders that this time Mehmed would attack the city with the full strength of his empire. The Urban cannon was ruinously expensive and liable to burst, yet Mehmed kept shipping barrels and shot up from the rear without pause. By later reckonings, the Urban cannon alone cost somewhere between 300,000 and 1,000,000 Venetian gold coins, several years of the Ottoman Empire's revenue and several decades of Rome's revenue at the time.
  2. The land approach to the Golden Horn could not be defended, which let the Ottoman navy enter the Golden Horn overland. It was a tactic never seen before in history; and once inside the Golden Horn, the Ottoman fleet was shut in and mauled by the navy of the Roman Republic, which kept the Ottoman navy suppressed at sea right up to the fall of the city. Like the Urban cannon, this was warfare of enormous expense and no substantive result, producing only considerable panic. The Roman Republic counted entirely on seaborne supplies and manpower for a long war, and a sealed Golden Horn would have shut both out. In fact the blockade was almost wholly ineffective: a small Venetian squadron passed through the Golden Horn into the harbor without losing a single ship or man, bringing the news that no seaborne supplies or manpower were coming at all. Had the Ottoman Empire truly been able to seal the gulf, Rome's collapse might well have come much later.
  3. The extreme shortage of defenders left the walls unmanned. Mehmed could mount human-wave assaults on every inch of wall, while the garrison could not possibly defend every inch. After concluding that the cannon was ineffective, Mehmed also tried tunneling into the city, but the Romans, desperately short of men as they were, destroyed every tunnel. And once Mehmed's Golden Horn fleet appeared, the defenders had to detach part of their manpower to run their own ships. An exceedingly small garrison had to face, under the terror of the cannon, daily tides of all-out assault, fight the counter-tunneling war, and still spare men to run the navy.

Even so, 1453 was an accident. Mehmed's final assault, a dozen or more cannon firing at once, his entire army and navy committed, the moat leveled with corpses, tunneling and ladder escalade proceeding together beneath the walls, still failed to take the city; until a small gate (the Kerkoporta) screened by rubble was found unguarded, and not even shut. Both sides' accounts treat this as an accident. It was almost entirely through this one small gate that Mehmed took the city.

Human history pivoted on that day. Rome's Greek-language writings were scattered far and wide, and out of that came the Renaissance; Byzantium, the most important terminus linking Europe and Asia, was lost, Europe was cut off from virtually all trade with East Asia, and out of that came the Age of Discovery. Had that one gate been shut, human history would have continued, but a Renaissance would have been out of the question, there being no books; and an Age of Discovery out of the question too, since a caravan crossing the region commanded from Byzantium cost far less than the ships of the Age of Discovery.

In fact, given Constantine XI's military gifts (evident in the many battles of his youth called "miraculous" or "heaven-aided"; professional Roman legions under his command could have held), Rome would not have fallen that day had there been any warning of Mehmed's entry. There were two ways of concentrating wealth at the time. The first was state taxation, which was essentially nonexistent: it could feed a bureaucracy, but left nothing over for raising troops. The second was the Orthodox see; across the range of Orthodox influence there had always been great numbers of believers outside Rome's political map who came on pilgrimage and brought tribute. If, in 1450, the see of the Roman Republic had begun forming a religious guard and issued a general call across the Orthodox world, instead of tearing itself apart over whether to unite with the Western church, it could certainly have raised the strength of one or two professional Roman legions before 1453. That would still have been vastly short of Mehmed's forces, or of the armies the Roman Republic had fielded in its history, but it would at least have guaranteed that the defenders of the walls could rotate and rest.

Human history is one long chain of accidents, and any one of them can decide the survival of tens of millions, or billions, of people…

Fascinating, and cruel.

bilibili 2017 New Year Gala, "Once Again" (再一次): A Speculative Analysis

Author: Shan Chunqing

I, Shan Chunqing (ID number 330105199412120020), authorize anyone to reproduce part or all of this content, and authorize arbitrary modification of the content when reproducing it. When reproducing, you may freely choose whether to credit the source and the author, but you may not credit any author other than Shan Chunqing.

As everyone knows, there are three surface endings:

  • the man dies, the woman survives
  • the woman dies, the man survives
  • both survive

Very obviously, all things are related to food, so these three endings are also called the Peking-duck ending, the Wushan-grilled-fish ending, and the Dongpo-pork ending. By the work's setting, the three endings loop through parallel worlds / happen simultaneously, and all take place in the same old castle.

After each surface ending, the return to the archive room forms a time loop, so the surface endings are not the final ending. And because it is a time loop, the true endings have already been disclosed in the course of the plot. They are:

  • both leads die and no one escapes
  • the woman dies and the man escapes alone

The plot drops plenty of hints to this effect, for example:

  • the castle contains only the woman's skeletons
  • the man filled a notebook with "I am the devil"
  • the skeletons grow denser the closer one gets to the archive room

Now, starting from the three surface endings, let us lay out the future developments (which, within the narration, are the past).

In the roast-duck ending (man dies, woman survives), the woman probably never sees the man return: being the emotional one, freshly rescued by him and swept up in feeling, she lingers waiting in the corridor until she dies. The skeletons in assorted sitting postures at the entrance, and those in the corridor, point to this. That she does not drive at the outset, and her behavior after the car breaks down, suggest she cannot fix a car, which may be another reason she never leaves. Or else she does see the man return, and the plot passes into the both-survive surface ending.

The surface ending where the man survives and the woman dies is simpler. The man can choose to wait inside the castle; after waiting a long time in vain, the self-styled supremely rational man walks out, repairs the car with the tools he has collected, and drives home. The cigarette butts at the entrance and the like hint at this. If he does see the woman again, the story re-enters one of the three surface endings at random.

As for the both-survive surface ending: the castle evidently does not hold enough food (otherwise there would be no skeletons of the woman), so growing old together is impossible. Since the castle holds no male skeletons but many female skeletons of one and the same person, the woman must die by some means. It is possible that feeling overrides reason and she lets the man live on; then again, given the notebook filled with "I am the devil", it is not impossible that the man carries on the fine tradition of his forerunner, kills the woman, and escapes. The distribution and postures of the skeletons say as much: those near the corridor and the entrance sit naturally with their backs to the wall, while those deeper in are head-down, toppled askew, sprawled across the stairs, rarely sitting or lying; such skeletons are unlikely to have died normal deaths. Nor are the skeletons' sex, their postures, and the notebook the only evidence that the woman dies in the both-survive ending. In one scene the two discuss the matter of the eye beside the wall; judging by the two single-survivor surface endings, whichever side the eye is on dies next. In the both-survive ending the scene is not eyeless: the eye is on the woman's side. So, in the both-survive ending, the man kills the woman or the woman kills herself.

Hence the final ending is that both die, or that the man lives and the woman dies. The story thus teaches a lesson: if you break up and then, without getting back together, still go adventuring as a couple all lovey-dovey, then even if the FFF Inquisition doesn't burn you for faking the breakup, it still won't end well.

New format or not, is airing an FFF-Inquisition horror piece over the New Year really okay…

A Brief Survey of Hilbert's Problems (Part 1)

Author: Shan Chunqing

I, Shan Chunqing (ID number 330105199412120020), authorize anyone to reproduce part or all of this content, and authorize arbitrary modification of the content when reproducing it. When reproducing, you may freely choose whether to credit the source and the author, but you may not credit any author other than Shan Chunqing.

Hilbert's problem set contains 23 problems; they are among the most closely watched problems of modern philosophy and have inspired research across many modern fields. I had jotted scattered notes on them before, and over the past few days I tidied the notes up. (For the actual proofs, go read the relevant papers… the usual number-theory / computability textbooks cover a few of them anyway, they're old problems… every proof is fairly involved… if I retold them I'd probably end up skipping some key step or other zz

Hilbert's First Problem

Description: between the infinite cardinality of the countable sets and the cardinality of the set of reals, there lies the cardinality of no set
In plain words: there is no set bigger than the integers and smaller than the reals
Philosophical meaning: the continuity of the progression of sets; whether there are infinitely many infinities (feels like an understated homage to squaring the circle? one countable, one continuous
Modern status: can be neither proved nor refuted in ZFC
My leaning: I feel that the continuum hypothesis, on my intuition, is false (intuition only. The mapping-style proof method constrains the lines of attack; a more reliable axiom system and proof method may be needed to prove/refute this hypothesis. That depends on another axiomatization of sets; the approach of constructing the sets first and laying down the axioms afterwards might be a breakthrough for axiomatic set theory. But there doesn't seem to be any good way zz
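For reference, the hypothesis in the usual symbols (standard notation, not part of the original notes): there is no set $S$ with

$$\aleph_0 < |S| < 2^{\aleph_0} = |\mathbb{R}|,$$

i.e. CH asserts $2^{\aleph_0} = \aleph_1$.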

Hilbert's Second Problem

Description: the mutual consistency of axiom systems is decidable
In plain words: one can prove whether a system is self-contradictory by its very definitions
Philo♂sophical meaning: modern science basically rests on axioms + logic; developments like relativity grew out of contradictions in the previous axiom systems, so finding a contradiction, or a method for finding contradictions, could greatly accelerate the creation of new theories. Except it turns out you can't
Modern status: Gödel's incompleteness theorems
Thoughts: Turing liked to play at proving the halting problem with the proof technique of Gödel's incompleteness theorems…

Hilbert's Third Problem

Description: given two polyhedra of equal volume, is there always a way to cut the first into pieces and reassemble them into the second
Philosophical meaning: an obvious homage to squaring the circle? just with two "squares" this time zz
Modern status: no (in the plane the analogous Bolyai–Gerwien theorem says yes; in space, Dehn's invariant says no)
Thoughts: why is space three-dimensional? just look at Hilbert's third~ (or not? intuitively 2 and 3 feel like magic numbers, and two- and three-dimensional space differ structurally in many obvious ways, but maybe that's only because four-dimensional space is hard to grasp; maybe there are plenty of differences between three and four dimensions too zz and here comes the anthropic principle

Hilbert's Fourth Problem

Description: what is a plane, and why is a straight line the shortest path between two points? what does "shortest" mean? what is distance?
Philo♂sophical meaning: methods of measurement
Modern status: er.. what was Hilbert actually asking?
Thoughts: I think he was probably asking how a norm should be defined, and which norm is the natural one. There are colorfully many ways to define a norm, but deciding which one is "natural" would start a brawl; better to withhold opinion, as one does with emacs vs vim

Hilbert's Fifth Problem
Description: is a Lie group a smooth manifold
Philosophical meaning: this depends on the definition of a Lie group, I suppose
Modern status: analytic Lie groups and smooth Lie groups are the same thing
Thoughts: some treatments seem to define Lie groups directly through smoothness…

Hilbert's Sixth Problem
Description: can physics be axiomatized
Philo♂sophical meaning: Kant tells you: simply no. Experiments cannot prove truths, since a black-swan event is always possible; but experiments can test (hypothesized) axioms, so just trust Laplace and take them as given;
Modern status: the ancients already had it (though perhaps by pure accident
Thoughts: if you count "no counterexample observed" as fact, then maybe; this is a question of faith. I believe in Laplace's move of replacing God with probability and stand against Lagrange, though of course plenty of people believe Lagrange… wasn't it Einstein who hated probability theory? A rationally designed universe is the default hypothesis of a great deal of physics, but what if the laws of physics really are one big tangled mess? What humans consider elegant and what is actually elegant may differ a lot… it's faith, but being so conceited as to assume the universe's designer shares humanity's definition of simplicity isn't great; one counterexample and string theory and its kin become pure mathematical toys

Hilbert's Seventh Problem
Description: is an algebraic number raised to an irrational (algebraic) power transcendental?
Philosophical meaning: er
Modern status: yes; the sort of thing that gets mentioned in a mathematical analysis course
As the world moves along, yesterday's unsolved mysteries all seem to turn into common knowledge zz
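For reference, the precise statement, now the Gelfond–Schneider theorem (standard form; note the exponent must be algebraic as well as irrational):

$$a \text{ algebraic},\ a \neq 0,1,\quad b \text{ algebraic and irrational} \;\Longrightarrow\; a^{b} \text{ is transcendental.}$$

So, for instance, $2^{\sqrt{2}}$ and $e^{\pi} = (-1)^{-i}$ are transcendental.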

Hilbert's Eighth Problem
Description: the Riemann hypothesis, the Goldbach conjecture, and the twin prime conjecture
Philosophical meaning: er
Modern status: er
My feeling is that the Riemann hypothesis needs some new kind of tool, and that 1/2 invites endless daydreaming~~ I keep sensing a connection to the topology of some space. Goldbach feels as if Brun led it down a side road, and I personally don't think Chen's theorem can help toward a proof of Goldbach… elliptic curves might be a direction, though it could just as well be some other structure entirely. For twin primes, the gap H can already be brought down to 6 (if you believe Elliott–Halberstam), and 246 is actually proved… whether this road ends up like Brun's, who knows. Different means, different problem, so no cause for pessimism, but number theory has too many cases where getting from 4 to 2 took a method different from head to toe, so it's hard to say; Tao apparently couldn't push this method to 2 either, so the road will probably get rough. Intuitively these three problems feel tightly linked: many number-theoretic problems bottom out in the foundational structure of numbers (primality), which gives one the feeling of probing truth itself zz though for all we know they're three completely different things

Hilbert's Ninth Problem
Description: what is the most general reciprocity law
Philo♂sophical meaning: quadratic reciprocity really is an exceptionally ingenious idea.. which suddenly reminds me that the Chinese translation of Gauss's Disquisitiones Arithmeticae is a bit…
Modern status: Artin's reciprocity over algebraic number fields can basically be counted as a solution? but a universal solution "in any number field".. time for your medication, maybe?
One would need a sufficient grasp of the collection of number fields to find a solution here, especially of the relations between number fields… algebra is one of the hardest disciplines humanity has invented, but also one of the only ones that feels close to truth; with humanity not even fully done figuring out the definition of number sets (Hilbert's first problem), it seems unwise to charge straight at things "in any number field".. one step at a time.. though it may well feed back into how number fields get defined, true; but I've been brainwashed by Artin so thoroughly that I can't think of any better way to define the sets zz
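For reference, the quadratic case praised above, in its standard form (Gauss; not written out in the original): for distinct odd primes $p$ and $q$,

$$\left(\frac{p}{q}\right)\left(\frac{q}{p}\right) = (-1)^{\frac{p-1}{2}\cdot\frac{q-1}{2}}.$$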

Hilbert's Tenth Problem
Description: an algorithm for solving Diophantine equations
Philosophical meaning: rendering the infinite finite
Modern status: there isn't one (Matiyasevich)
Er, standard historical background in computability theory? surely it was covered in class? though it was a morning class and I slept through all of it

Hilbert's Eleventh Problem
Description: the solution of quadratic forms
Philosophical meaning: er, no idea
Modern status: apparently it can be worked out by local analysis
I don't really understand this one; it feels like another primality-adjacent problem, yet the proof doesn't connect with the sort of primality in the eighth problem zz haven't figured it out, worth thinking over again sometime
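The "local analysis" here is presumably the Hasse–Minkowski local-global principle (standard statement, my gloss, not in the original notes): for a quadratic form $Q$ with rational coefficients,

$$Q = 0 \text{ has a nontrivial solution over } \mathbb{Q} \iff Q = 0 \text{ has one over } \mathbb{R} \text{ and over every } p\text{-adic field } \mathbb{Q}_p.$$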

Twice-Serving Ministers and Public Opinion in the Qing

Author: Shan Chunqing

I, Shan Chunqing (ID number 330105199412120020), authorize anyone to reproduce part or all of this content, and authorize arbitrary modification of the content when reproducing it. When reproducing, you may freely choose whether to credit the source and the author, but you may not credit any author other than Shan Chunqing.

In any era, once one dynasty gives way to another, there appears a crop of leftover loyalists of the former dynasty and ministers who served both, the "twice-serving ministers." From the Shang and Zhou at one end down to the end of the Republican era at the other, it has never been otherwise. And in every era, public opinion on these twice-serving ministers undergoes a gradual transformation. From the Shang and Zhou down through the Tang and Song, surviving materials are scarce, and what remains can hardly reflect the overall movement of social thought and opinion at the time; as for the end of the Republican era, on much of it the coffin lid can hardly yet be said to be closed, so an objective and worthwhile judgment is likewise difficult. But by the late Ming and early Qing, the surviving social archive is rich enough, and the upheaval lies far enough back, that it may be called the best era in which to study the question of the twice-serving minister's standing in public opinion.

Twice-serving ministers are a historical inevitability of dynastic change. No change of dynasty has ever lacked them, and their reasons for arising have been essentially the same in every age: riding the tide of the realm and protecting their own interests, nothing more. From Weizi of the Shang, who "bared his torso, bound his hands behind him, led a sheep with his left hand, held reeds in his right, and advanced on his knees to submit," to the Ming literati who donned prisoners' garb outside the Meridian Gate to welcome the conquerors, form and substance alike are the same medicine in new broth: "year on year, the ministers look alike." Opinion about them, however, differs greatly from dynasty to dynasty. Some dynasties maintained the twice-serving minister's standing as a humane worthy throughout, the just-mentioned Weizi being the classic case; most dynasties, though, moved opinion through several stages according to political need, some ending in silence, others in compiling a Biographies of Twice-Serving Ministers to condemn them: "year on year, the verdicts differ." From "obeying Heaven's mandate" during the wars of conquest, to silence at the founding, to later re-examination and finally condemnation, the process tracks the sitting government's political needs almost exactly. And the dynasty in which the process is most typical and most complete is the Qing.

Before discussing the Qing, consider, in the first person, what a hypothetical dynasty with unlimited control over public opinion, call it the Jia dynasty, must go through at its founding. First, while the founder of Jia is still contending for the realm, when an opponent's ministers defect, the inevitable choice is to welcome them with open arms while keeping a wary eye on them. At this stage, the more defectors the better. So the Jia ruler wants public opinion on twice-serving ministers to be relaxed, dominated by "the phoenix chooses its branch; the good minister chooses his lord"; this is also when such men speak most freely. A decade or two on, the founding emperor of Jia has won the realm. Now the first task is to guard against a counterstroke by the old dynasty's leftover ministers and revolt by the regional powers, and the twice-serving ministers are the best instrument against both. A man becomes a twice-serving minister because he has talent, or troops, or a name; without talent, troops, or name, Jia would not employ him even if he defected. In this period, the talented ones build Jia's civil and military achievements and shore up the regime's foundations: untouchable. The ones with troops might be draped in the yellow robe at any moment: keep them well fed and watered. The famous ones are natural envoys of surrender; even if such a man refuses to serve as a channel between Jia and the old dynasty, his submission is in itself a symbol useful enough, so as long as he does not talk out of turn, treat him well. The propaganda of this period must therefore preach loyalty to the sovereign while still celebrating the twice-serving minister, and the fusion of the two is called "obeying Heaven's mandate." If there remain hostile states unlikely to attack, or hostile organizations with some slim chance of revolt, there will likely also be some unification and suppression of thought; the Qin burning of books and burying of scholars, and the Qing literary inquisition, are the extreme examples of such suppression. Eventually the twice-serving ministers grow old and retire, and Jia stands secure. Now traditional Confucian culture, which lets the court effectively control thought and stabilize society, begins to become the social mainstream. Whether Jia's founder followed the Legalists, the Militarists, or Huang-Lao, the appeal of Confucian loyalty to the sovereign and defense of the existing order is irresistible. At this point, the verdict on twice-serving ministers is, at best, silence; and for those who do come up, no particular courtesy is required.

Hence a dynasty with unlimited control over opinion will steer the public evaluation of twice-serving ministers from leniency, to "Heaven's mandate" alone, to gradual silence and finally condemnation. Successful dynasties have mostly gone through exactly this; many failed kingdoms, by contrast, kept an environment stressing loyalty to the sovereign even while the field was still contested (all the Eastern Zhou states except Qin and Qi, or Wu in the Three Kingdoms period), which seems to say that a dynasty's success has much to do with its grip on public opinion. Not every dynasty completes the cycle, though: the Tang barely condemned the twice-serving ministers of the Sui, but then the Tang, starting from its emperor, could hardly count as loyal ministers themselves (indeed, hardly any dynasty founded on rebellion condemns twice-serving ministers, while nearly every foreign conquest dynasty eventually does).

The Qing went through this cycle of opinion in full.

Before and just after the Qing entered the passes, the environment for twice-serving ministers remained fairly relaxed, and they lived in a rather tolerant society. Neither the leftover loyalists who refused office for life nor the ruling Qing court had much to say about their fidelity or lack of it. By the time of the pacification of the Three Feudatories, the common judgment of twice-serving ministers ran along "obeying Heaven's mandate," without much deeper discussion. By the Qianlong reign, the twice-serving ministers had all lived out their spans, and the Qing, by compiling the Biographies of Twice-Serving Ministers and pressing Confucian loyalism deeper into society, condemned their conduct. Many causes fed this process: social, economic, political. As the worthies of old put it, "yellow cat or black cat, the one that catches the mouse prevails." Opinion is decided by all parties together, but the one catching the mouse by then could only be the Qing court. Neither the twice-serving ministers nor the loyalists, one imagines, much wanted to appear in the Biographies of Twice-Serving Ministers; but set against the Qing's "national interest," opinion could mount little resistance.

Whether the Qing court governed without visible effort through economic forces, or governed actively through the compiling of books, does not matter. What matters is that from beginning to end, the Qing never acted against its own interest. The formation of public opinion is a contest among many forces, and the Qing played the part of a rational chooser: at every shift of interest, the Qing government picked the opinion environment that suited it. The Qing may have had nothing like the advanced surveillance of the modern American government, but on the question of controlling opinion, the Qing proved that steering by influence needs no elaborate monitoring apparatus; you merely have to not court death. And that is rare. In Chinese history, countless dynasties chose suicide in the arena of opinion, and some lords never learned, even at their deaths, that they had died by their own hand.

Taking history as a mirror, one can know rise and fall; and the knowing lies in the risers having knowledge. Control of opinion is one of the riser's essential skills; on the strength of opinion alone, a party or a lord can rise. The Nazis, the Soviets, the Americans and British all commanded formidable powers of opinion leadership, while France, before the German attack, was split into two camps: one pursuing internationalism, full of sympathy for Germany's unjust treatment; the other pursuing force, convinced that as long as the Maginot Line stood, France could sleep soundly. To know rise and fall: there it is. Will future history trace this track again?

The Bible says there is nothing new under the sun.

Now secured with DNSSEC

In the last few days, all my web services have been secured with DNSSEC. I had used DNSPod for some time and was pretty satisfied with their service, but after several incidents where resolution failed from foreign locations, I decided to change my DNS service. So my DNS service has been changed, and secured with DNSSEC as well.

DNSSEC is a chain-of-trust scheme that authenticates each DNS reply using asymmetric cryptography. The chain starts from the top-level trust anchor, the root zone ".", then passes through a gTLD like "org.", and ends at the registrant's domain. It is a signing-only mechanism, so DNS requests are not encrypted and can still be cached. The weakest point is that your domain registrar has total control over the DNSSEC key, so if your registrar wanted to swap it for something else, it could. Also, the signing keys of "." and "org." are both 1024-bit RSA, so there may be some chance of breaking one with a really big supercomputer within the key's expiry window. (There is perhaps a 1.47% chance of breaking a 1024-bit RSA key with Tianhe-2 within six months.)

It’s a good way to prevent DNS poisoning. With DNSSEC, the most respectable mail service(Google) will not be fooled by easy tricks to send the email to some MIMA server. Also, if the client’s DNS service is secured under DNSSEC, the client will not be fooled to another site.
However, there is little ISP that does the right DNSSEC check inside China. One famous DNS provider inside china, 114DNS, has exactly zero aware of DNSSEC. And if the DNS record is signed with the wrong key, the 114DNS will not care and just return the malicious result.
So I set up three DNS servers to do the right DNSSEC check. One for my personal network(mail/VPCC/wiki/gitlab/backup/LDAP/WebDAV…) and another for my personal VPN. The two DNS servers using another DNS server as a cache. Now the weakest spot is that before I start my VPN, the DNS is poisoned. However, as my VPN is secured using another set of RSA keys, and I never visit anywhere without my VPN on, it should be fine.

With DNSSEC, I can now have my keys published via DNS. My GPG key for [email protected] can be auto-fetched if DNS search is enabled. The weak point is that the DNS search function does not verify DNSSEC at the peer but relies on the remote resolver. RFC 4035 seems to suggest that any client capable of checking DNSSEC should check it by itself. I believe GnuPG is a client that has that ability and should have checked DNSSEC. Without that, anyone can simply modify the UDP packets between the resolver and the client to hand the client any key the attacker likes. A temporary solution is to set up a DNSSEC-capable validating resolver on localhost and dig from 127.0.0.1:53.
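For illustration, a minimal sketch of that local check, assuming dnspython (>= 2.0) and a validating resolver listening on 127.0.0.1; the queried name is a placeholder:

```python
import dns.flags
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["127.0.0.1"]      # local validating resolver
resolver.use_edns(0, dns.flags.DO, 4096)  # ask for DNSSEC records

answer = resolver.resolve("example.org", "A")
# The AD (Authenticated Data) flag is set only if the resolver
# actually validated the DNSSEC chain of trust for this answer.
if answer.response.flags & dns.flags.AD:
    print("validated:", [r.address for r in answer])
else:
    print("NOT validated -- treat this answer as untrusted")
```

The AD flag is only as trustworthy as the path to the resolver, which is exactly why the resolver should live on localhost.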

Anyway, having it is better than having nothing. But still, if you want to send me encrypted emails, see the about page on this blog and use the keys there, or make sure you are doing the DNSSEC check on localhost…

Perfection is death

Being perfect is good. But trying to be perfect is a death sentence for anyone.

There is no perfection

In the theory world, there is a top for everything, and you can reach perfection just by spending enough. It would be nice if a project's quality rose linearly with the time spent; it doesn't. It's just like speed: you can reach a certain speed easily by accelerating for a while, but past a point, more accelerating time and energy buys almost nothing, and you can never reach c even with an infinite amount of time and an infinite amount of energy. It's the same in any project. You can reach a certain quality level with a certain amount of time at the beginning; but no matter how long you spend, it's never perfect.

Let's use a backup project as the example and work through it in detail. First, we define the perfect state of a backup project:

  • No one can access the backup data except the owner
  • The owner will never lose any useful data because of the backup

First, it’s something that can be easily done. You write a script to diff the data, divide data into small s3 objects, GPG encrypt it and sign it, then send it to Amazon Glacier. Just some lines of script, easy.

But when you put it into your crontab, you find something missing. It's not a perfect backup scheme: data can still be lost if you accidentally delete it between backup cycles. Not tolerable! But you can still solve it. So you write a service, then go into your kernel source tree, open fs/open.c, patch the kernel, restart the system, and find that not all calls are covered. So you change more sources, patch the kernel, restart the system, and again, and again…

You think you have a perfect solution now. Every time you write a file, it is immediately transferred to Glacier; even before the file reaches the disk from the cache, it is already safely in the cloud. No way to lose data now.

But problems can always arise; it's still a long way to perfection. What if Amazon goes bankrupt? Easy, add a backup to Aliyun. What if your backup GPG key is lost? Print the encrypted version and post it everywhere. What if the network is down? Write another service to do a watchdog job and beep loudly whenever a backup fails. Beeping is, of course, not perfect: you need two private network lines to Amazon and Aliyun just to provide stable networking, so you buy AWS Direct Connect and some hacked-together equivalent for Aliyun. But those can still fail, so you build an automatic program that calls Amazon and Aliyun to fix the private line whenever it finds the line broken.

Yeah, now you have a perfect backup solution. Or do you?

It’s still far, far away from being perfection.

What if RSA is not secure? You need a private asymmetric encryption method to be sure it's safe (I use VXEnc~). What if your important idea is lost while typing in a TTY? Patch the kernel again and add keystroke-stream backup. What if the kernel panics? Rewrite the kernel to perfection so that it never panics.

But it’s still far away from being perfection.

You still need to write a git-like branch system to manage the backup-restore history, you need to store every object's travel history, and you need to secure the network once again, so add several more providers. And you need a local offline copy, so you build a service just like Glacier. You need perfection, and Earth has some chance of nuclear war (0.7% in an average year, it is said). A 0.7% data loss rate? Not tolerable! So you build the world's biggest rocket launch station to beam out a backup copy in real time whenever you save a file. And it still needs much more work to keep the copy secure in space.


You see, it can never be completed.


I spent about 2 hours finishing the first step, but far more time has been spent since then, and I have still not finished everything on the list. I believe much more could be done, just to make the two simple requirements hold:

  • No one can access the backup data except the owner
  • The owner will never lose any useful data because of the backup

I have developed a feeling that even if all human beings spent their whole lives just trying to finish such a simple backup task perfectly, they would fail. Even if every human generation, one after another, spent infinite time on this simple data backup project, they would not achieve perfection.

There is no perfection.


There can always be perfection

Though in reality there is no perfection, you can always find some better way to do anything; you can always find something that would make your project better. Since the internet exists, you receive far more information than your ancestors did. They could live in a dreamland where everything they did was done perfectly, even when they couldn't be sure their house would survive the next storm; you can't. You will always be receiving information about how to make something better, and that information tends to make you believe that building a better version is easy and simple. Your knowledge is greater than your ancestors', your abilities let you do things that push your project toward perfection, and your brain refuses to believe anything is finished until it is perfect.

The smarter you are, the harder it is to lie to your brain. If you are good enough, you may find that everything you have ever joined is marked as undone.

The modern lifestyle feeds this crisis. In the good old days, you knew when your work was finished. When you made bottles for sale, you made bottles; even though they were imperfect, you did not sit around thinking you should snatch them back from your customers to make them more perfect. Once the bottles left your hands, the work was finished: no more headache.

But these days, you are a worker on multiple projects. You cannot finish a part of a project and mark it as done: since you can always make changes to that part, you will always be trying to perfect it. As long as you have access to it, it is never marked as done.

As a human, you feel the Zeigarnik effect whenever things are left undone. When nothing is ever done, you go mad, and everyone feels that madness in modern society. People want to do things, but they can't, as there is always so much else to do. They want to do A, but there are BCDEFGHIJ; they want to finish B, but there are ACDEFGHIJ, which shine far more brightly in the brain than B does, thanks to the Zeigarnik effect. They decide to finish J first, but the brain keeps turning over ABCDEFGHI. They decide to start a perfect timetable with a perfect J, and J never gets finished, because there is no perfection.

In the end, they finish nothing.

But still, ABCDEFGHIJ sits in their brain. They need to do it. So they browse the internet looking for something for B, find a good way to solve part of C, do that instead, and then remember that B is not even started. Guiltily, they close the computer, look at the to-do list, spot H, try to do it in 5 minutes, and the mobile phone rings.

Have you ever had the feeling, after a tiring day, that you have done nothing?

Don’t you?

Henry Ford invented assembly lines to save the worker from low efficiency. Some textbooks say assembly lines improve efficiency by letting each worker repeat a single task. That's not entirely true: assembly lines improve efficiency by letting workers forget their previous product and focus on the current one. An experienced master mechanic can build a car from raw metal if he wants to, but even though he is more experienced in every detail than the assembly workers, he will never reach a fifth of the efficiency of a man on the line. He can build a car in 10,000 hours with all the tools a worker has, but 1,000 workers can do the same thing in 1 hour.

It’s not because he is not experienced. Even the assembly line is filled with fresh new workers. Everyone can be much more efficient than the lonely car master.

It’s because he can touch his product even when a part is finished.

The only solution to this problem is a freeze-and-GTD lifestyle. Every single project should come with a test that tells you whether the project is finished. Once the test passes, even if your gut tells you the project is a mess, you must never touch the project again. It's finished. And not only finished: it's frozen. For a preset period, you shouldn't do anything to improve the project even if you genuinely want to. Start a new project after the period if you still remember it. But never think about a project once it is finished, as it will never be on your list again.

Have you heard this somewhere? Does it sound familiar? Yes, it's TDD. Under TDD you write more production code every day (tests excluded) not because your time is magically doubled, but because your code can be anything, ANYTHING, as long as it passes the test. Once some code passes the test, you will not and should not review it. It's a way to fight the Zeigarnik effect, just like the assembly line.
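A toy illustration (hypothetical function, standard pytest conventions): the test below is the entire definition of "finished"; the implementation may be as ugly as it likes.

```python
# slugify.py -- once test_slugify passes, this unit is done and frozen.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# test_slugify.py -- the test, not your gut, decides completion.
def test_slugify():
    assert slugify("Perfection Is Death") == "perfection-is-death"
```

Once `pytest` goes green, the file leaves your hands, assembly-line style.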

If you can always stay focused on your current topic, you get a 5~10x performance boost. The record bears this out: assembly lines make workers focus, and a 10x gain is seen; good TDD makes programmers focus, and for some programmers a 100x gain is seen. You can have the same boost in your daily life: just behave as if you were on an assembly line, and you will be fine.


danger to HTTPS, doom to SPDY

Since the BREACH attack, it seems there is no way to transport content securely in the HTTP world.

The BREACH attack is an HTTP version of CRIME, which recovers encrypted messages by analyzing the compression ratios of different media. It is well known that one can distinguish pictures from text by compression ratio alone; before CRIME, however, there was no easy way to tell what exactly the information was from the ratio. But the breach was always there. The words "faster" and "sunoru" have the same length, yet the (binary) entropy of "faster" is 2.58496 while the entropy of "sunoru" is 2.25163. So if you know the original length of the words (6) and also get access to their entropy, you can extract rich information from the results. Against a "perfect" compression algorithm, with an observe-only channel, you can learn how many times each letter appears in each word, which is generally not very useful (but shouldn't be public even so). Real-world compression algorithms are NOT perfect, though, and the real-world environment is NOT observe-only: you can send messages to the server to determine which real-world compression method it uses, and with multiple requests, a CRIME-style attack extracts far more information than the simple ratio.
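Those two entropy figures are easy to reproduce; a quick check in Python:

```python
from collections import Counter
from math import log2

def entropy(word):
    # Shannon entropy (in bits) of the letter distribution of a word.
    counts = Counter(word)
    n = len(word)
    return -sum(c / n * log2(c / n) for c in counts.values())

print(entropy("faster"))  # 2.584962...  (six distinct letters)
print(entropy("sunoru"))  # 2.251629...  ("u" appears twice)
```

A compressor's output length tracks exactly this kind of statistic, which is what the attack reads back out of the ciphertext length.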

For HTTPS, this is a danger to web pages carrying simple information. For example, some banks in China show how much money you have as a number inside a picture; when the picture is compressed, it is quite easy to recover the real number from the compression ratio. With a precomputed table, you could decrypt millions of such "money pictures" per second on a MacBook Air. So if you find your bank transporting your balance as a picture, be aware that it may be a deliberate way of publishing that information to the whole net.

With SPDY, however, your app may be cracked even without such deliberate setups. SPDY's speed rests on compressed headers, which include the URL, cookies, and authorization tokens. Since the client sends those headers wherever the user goes on the same site, you just need to XSS the client onto a static page (e.g., a 404 page~), and you can harvest everything in the headers without any painful struggle. And once you have the headers, you have the URL (so the complete browsing history is public), the cookie and authorization token (so the person's login status), and all the content of the page. It's as if the page were being visited over HTTP without the S.

HTTPS and SPDY are not the only victims: Tor, which uses gzip as its compression algorithm, is affected as well, though cracking Tor may be less easy since it reuses its TCP tunnels… SSH with compression enabled can also be decrypted this way, though the gzip guessing takes some skill and luck, since you cannot easily make the user resend things.
In conclusion, SPDY is effectively cleartext to a careful attacker, and HTTPS is not so secure anymore…

The good news is that the network working group has finally recognized the danger in compression and decided to drop compression support from TLS 1.3 draft-02. Did I say good news? It hardly seems a pleasant change for those with only limited network bandwidth…

HTTPS SNI

SNI means Server Name Indication, a mechanism that lets the server know which domain the client is connecting to so it can return the corresponding certificate, which makes it possible for a single IP to serve multiple HTTPS sites. It is defined in RFC 6066, section 3.

The extension changes the TLS handshake. The client includes a list of the DNS names of the server it wants to connect to. If the server has a matching certificate, the handshake proceeds normally; if not, the server should either send a fatal-level error and drop the connection, or just carry on as if nothing had happened (and present the default certificate).
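For illustration, Python's standard ssl module sets the extension through server_hostname (example.org is a placeholder):

```python
import socket
import ssl

context = ssl.create_default_context()
with socket.create_connection(("example.org", 443)) as sock:
    # server_hostname fills in the SNI field of the ClientHello,
    # letting one IP choose among the certificates it hosts.
    with context.wrap_socket(sock, server_hostname="example.org") as tls:
        print(tls.version())
        print(tls.getpeercert()["subject"])
```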

The extension also affects the TLS server's session cache. A server that supports it will never hand out a cached session to a client whose server_name does not match, even if the client qualifies in every other respect.

Some people think SNI adds a security risk because the client transmits the server name in cleartext. But if a site is a TLS site (without SNI), anyone can learn whom a client is talking to simply by connecting to the same server: the IP of a traditional TLS server essentially gives away the domain already. Telling the server the domain adds no security risk to the protocol.

In fact, since the extension provides an extra check on the session cache, it actually reduces the risk (however impossible-looking and useless that attack already is against a traditional TLS server) of the server resuming the wrong TLS session, one opened by an attacker, to send messages to a user.

Now lab to 6.5

After I altered some files in gitlab, upgrading stopped being an easy and happy job: every time a new version comes out, dozens of files have to be merged by hand for the upgrade to succeed. So, after hours of mental struggle, I finally decided to upgrade. The process was not as terrible as I thought it would be. But still, DOZENS of files to edit……

And now the update process is finished. Everything seems to be good. If anything goes south, please email me~

Is Meg Jay Right?

In Meg Jay’s New York Times article “The Downside of Cohabiting before Marriage” publishes on April 14, 2012, the author suggests that cohabiting may not be a good factor in marriage like many people assume, actually, it may enlarge the possibility for couples to divorce after marriage. She argues that cohabiting couples may just slide into marriage without serious conversations about why they should live together, and, unfortunately, people’s standards of a live-in partner are lower than their standards of a spouse in most cases, which leads to unhappiness after marriage and therefore enlarges the risk of divorcing. Meg also suggests that people may have different views toward cohabiting: Women are more likely to think cohabiting as a step towards marriage, while men are more likely to see it as a way to test a relationship. These asymmetry ideas may lead to low quality of understanding and may eventually lead to the break of a marriage. She argues that cohabiting is filled with high switching cost, which may make people be “locked in” by cohabiting, and miss their true love because of it. Finally, Meg concludes that because of the high risk of cohabiting before marriage, young people should discuss the commitment level and motivation before sliding into cohabiting to prevent the cohabitation effects.

Unfortunately, there aren’t many real examples in Meg’s article, and the examples Meg gives in her article do not support her conclusion solidly. Firstly, she suggests that there are some risks lie in cohabitation itself, and gives examples which show that heedless cohabitation which leads to unhappy life and eventually leads to break up of the relationship. However, all those examples only suggest that a heedless relationship will end badly, which is a common knowledge. So that those examples are not incontrovertible evidence of the risks lie in cohabitation. She also mentions in her article that cohabitation is loaded with switching cost, which makes it difficult to break up and finds a more suitable partner. But in fact any close relationship will bring switching cost, and will make people have a hard time to make right choices. It is true that cohabitation is hard to break up, but breaking up a marriage is even harder. In this case, I believe marriage is even more dangerous than cohabitation. The author assumes that a never-breaking marriage is the ultimate goal. However, this is a false supposition. There are many stories about unhappy couples who live together for lifelong time. They waste all their life to endure each other, and miss all the opportunity to find a better partner. It’s more tragic than those who divorce and then find a better partner. So that I think a right partner is much better than an unbreakable marriage.

As for the statistics, she suggests that some studies show couples with cohabiting experience divorce at a higher rate than those without, yet she fails to give us the exact numbers. According to a long-term study carried out by the U.S. government with a sample of 22,682 people, couples with cohabitation experience have a divorce probability of nineteen percent, while those without have a probability of twenty percent. So, according to this study, couples who cohabit before marriage are not more likely to get divorced. Since most cohabiting couples are more open-minded than those without cohabiting experience, they are also more willing to choose divorce when a marriage doesn't work out; the lower divorce rate therefore actually suggests that couples who cohabit before marriage enjoy better marriage quality than couples who do not. And there is indeed research showing that cohabitors who marry report greater happiness, fewer disagreements, and less instability in their unions, and are better able to resolve relationship conflicts through nonviolent means. So I believe that cohabiting experience may help people live a better life after marriage.

In her article, Meg Jay gives us evidence that cannot fully support her ideas, and real-world statistics suggest that cohabitation may in fact have a good effect on marriage. I therefore believe the "cohabitation effect" exists only among some particular clients of Meg Jay's; for most other people, cohabitation actually has a good effect.